var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz  [gzip-compressed binary payload; not human-readable as text]
btz9R׭#r>KOrǥ{.pO/s.,9\_ϦVk5>JIYjC+}~%Ӥ%<E b/r1 ЄBrfQo1b5[1~3)6Fd[Gݑnrwo`,ld_3YTmLOy>joJ4`1H2ƴVkzNZ ,mfVF#JGPt4(Q{ ̢UA&(YkF0KnCBe2N*g+J$DfU&ΞG:c3_>|S6mc2[%Y{ tv\Nﰢm20l˲B6A9-*8cs 9V *'ĀGA˽0 D}Lh^1x: PC*w!F-9|reg%& jx7}-ѷMI(j qXP1.ˠx"KJ-^N)xlJly6q訍2\qE #9JrmE`N,@6ǃD Sy ]qR^WW:m #ǫzQ32rms}P +=q(m9(HԳ g?.bġ]B=a԰'Y/b}ϻ/^_~0.wDfS"3SV0$fZrV8H `0~H$1iV82sG[K9(39Pia$H g`j : l2G߱]e{v??<_y*nѾyq9[|R[:sYwq<_|^ n;~eI~O"_W7,ή?;1w5E:Ǯ_gmO-{mTVAcϫ >vX6<P?&GnXeKC1mȴ)lV}`O^0 O+Ib5[;O+h~t\Җq>#z8jqVR"mxY(bԻlXD zwlnt]oj}#&-/o[os1ڤv&(.|Ak`œuĆ\#0xYvrp%Zrʃ74$bf)ȗ7nfnH{Rh?\P166vESH\Ddp*,$9xF_gS"].rQ$=iY2m m72$nεO{6=ʟ{y|fC6HhՁ%ҮS,\CQ~.//L>~6_WB,9X2j.yK-rGid v9j(%p<OcM0ׅiv~cFg5V GrO vKvT۰dF,:$,B|Jzu~0$g.Bi+?nѳ=;ѳ=;[NQe|ɠg)(Z(ʢt2Y%,jI=AY\y_9%>ɨ(jZ.C/tk|I;Nk5PPcZ_7K px{@j 4Bz-:@Qr7hQܪif+;Żd&{|vlb '/wtjkYrRrF_<ߚ7z? @oۉrSpw)mEAdfhC8k#4OG4 ss/vjgzx^]ŨW:C|Q<ٻ{K%,+<7Ynz{VQxOɧ~]gvoݽYpmQE^ms"rjRIo[ b|ӿzQ?.=I9}r5#cY ]F oŭ:ԣ:D "Ss/"Đٷ2޸+|OVm ̖TFQ!N1 F`R×{,]3կ-Wwv B] FYEֻ<^.m;~'sբMy6? G ry{VqmA҄AʻEB2K-;pASATΊBUp;Ie]C_? `(l*Ur2\ rGm0[l"[WTU*Rc)) J qE[%63΂$}R$Z4b1J&ijb (w[ %Q(kE'cp pvzPe&*X-@abj*A2MZ% 85i>8q)+tɽgl2"iͯBS"Q t6N)}`yl_ oUގ3àT~tuޖbrޖ=YSoju*D.?9GQ&u܌Vr)RNg9e&sPBwRZT+o*Cm.]Yfg'9zПLsMUAS:I1 B$YDʉ%QG\] =Ρ&KȬC2h>2W.6ۍal.&jm]XkNkwAb#6iِ 8핃AiNG(sE%!,NRaRYﴠ2"C E{,2%GQu!LuTևņ]Nq$+RF454b80!gH "wUlD6ŠהKMF4"opjT,e=.=2Jxޣ'Bg95bl׈C8wTiS Z^+'Ht UWTZpAp!0ӋOۢIǾևl?}*lKYm#F_6+nФQ;$uُ/ЇUųkS?ALtR?o+/.^wtiKTy5gEEu!q{|=R>Q$b !q1D^'I<%Ɖ8xM|',3o@7jo:E --Xi#1զR١m*5sfzy5lKs.p0]@>y%s#o.":-dwLSų b:BxU A@Z^Y5,V鸋*S_U@ױF$|b>ZqnHte'SܙPWJS1x\Z9KɜR) ڲaZ"B31RHp;rV#CeLveh abٱfM[6,ksGgݐd8x{tr%jP!3%M7<;g,Nֈ,zc(pN1CRK4pbǣ %!2QmKGS_u|cglkn+.gJFPym4Eg*igᐩ-.(+][1ǮZ2mb_fk|I2&BVy'Pip̨( LDI^LIs]8Ds,q2 Q|-P'qin!]HlNf\-AFI:3AQ&o3!vNڔ޶z;cSj=xgNrh-ԛA&ܣd)lEoe X4\MGMsǛP4'E8OFWg)Mań&S|֤ FɳQ{g\yRisSc0/sx=}h~L4ȑS OpM|'o-3ćD BzSS>ͺ?~xB3UD N㪐$Fw/|;"JrN?Itqj$Z@ݑqzB1v qpqap NdߟD,8Z'}\e9<}ekSQQ?wEA)&Yw8j&3fY5PǹLتj@uqcMY2uyEW. o{^f^q3Œ"gУD rOr gOɋ9Y\or$wG{+hON{wZmͧ؛E۔pT?&)e pʍ'o8x ԓd%aK{b&$+Y$WJD#s &߁পWϷ6yBsrfbBE)Ny qGȔ+1OH%`uu( 6 6OUkjhKĞc\bʩ}CέO%tmL%XJu ^&0ԯ-tZ Awܛ5o^ȿAyfIjgRKW0^MP‚7bZ ,qU$Nz9ȫjy5>!gJZ5MAJüQ3 "%OU+ 2΅d$M B'gSGq#bs 1DmpU-&g[zƫ Jt;fғj}(B[r?`}^>a8ir6 u]U(uXHTVfJ:o!᧥- SEv[q2*xC "ń$aE@:m@ &(eT6 &us㭿?;\u[Qo bĔxїav:-hW:U+)N[޶*h|6(ؔ ;zڬͧi\!| 9aK:BuҤ#&GȽ7M)"2gm#: ^\3` &N & 2eF2 ,Z#HOpv,(;뫫Mmڄ~z>}nXpէ~8AK@<)pSy /S7TͤCGG"G%뢑=G8ؠQ!Bw6<** r /C#`t3#ͭ @#8"eDRX*qgW;{r$&XXqVEjLxsFFqBm{lh[Mh>*{k[L)q|6ZP+^`jъ y~i F4E]"H!V /Ga4BkKaM1zJTϣv6#fv-ka[3Qv 1Y .~ϊdYUMlɽX!Y Dun[^?l"!c{4ܩĪkj?YL!K"f>gd}:;dLim$Y"f ~-ӱM]ݺre@8 j񊙻փX#DnQ)P#+Ap5jq.Jo=ǴS= ?OK'.'.)|Ejwmgy.`:7۟M=&܎&m"jJsA65&l$UVU$xK>6x뽰cyHv#}#X< 㠔fVIBLCy(a J8S%2aCs8 )H/x;S)D D[p/60I#P # xdB4]ʏ*`osc 70m`Jvc1UtO\` 8qTqţ \i/`\],.j+rY(Abzn2B ${!fR_YA1ض _,X[oj?^NW"Gnlv{ۯYx87o$o-7@MXJߓ8/:~|;;~9ɣc!g\yrnqpvGqs#GD(-{">uo luJ0T"! acd4Jd46JḀs(jrHRi_Gj4vCP q;-RLp7)Eo⬶A[۳sO\ IxXf.tch'/"-էG cVGά8J()qN2t02%=QjgBL /\u+K!`BL¨1jv,q(K)µszqyֳ&ΚMASGSs=NFd"DC`إF Bϩ K! DiBS؄/PawYmd};cGdT6 XEdڃP~BAEho"XJ:.)֫jb)tL|:7!-aҀ]&)u6*X4 GR@et:6 .muNSBW)鲶͠a;@ہZ !ہM3?{ iOP{v̧I戶d/U;Rwg?/1w"]pS>$0FhX|7*?Ow&-a=]MJ:܂^T DA^Y >gnexM?TCZm06³4)x3l}Zw[iCܞ`Av]f K1+lIawWIvڧٴz &e S;EY0A٧y1 \tpy5ÿC/Ê-9,ơ1){)W2:U<y,![nu '1bePC.)ɾB6RXۄ%)wkN!$*8ʝ/P1qV{tHbaiJF8c*_ 4=_CnEņl\$g$rCe߆N}t }=6N>NOIhq:gf!8BpRKwY fiW 5/qWS-`v:=_;5PEx3Ek7슪UxmUM:t%!V'Ij> b?.&o9qTGJu٨#HgfEjfޯ1CZas~uEބbp=cnBΧfGWץ"}YE#fXqܠC=7?o0ݿoЗص@P>q2МѠsfW9R.h^yC ,M?<%IcDD~߂hlzֳf_f.5-ۛz=OFH[PU)4}'x@=DFd4UJ/e\"Zˆ1ZΉ4wFm6*GT : j@C[e}Z_ހ'=vϟ~ 2ev%;TΫe^]OWpT˷d.708;<8':C0Dn7AHzB0 SNd`:Iadx*D&ptw;WaJ3c~$]lğCG,?e$t2 ȕ{}"1tvjүԮ-o&jCf" -Coh:kb۹Ԝv53G1߽;^r# $g\Kr10ω ! 
,+)jYp[UF3ёadFz#`" ;n};뜂5䩆?4߾/;Cx888\+7(/5,2 O3xgb$03RZH 2VB #1͔12)Hn D'2Zx)"NoY./ya4#TXE vrI~[A;!>ܦ ɳQŏ̔idJ[ϼ1è̚;S=wϫFց;[w =|apO9pL-FvfXc&=1 #g$1s/+0}Ϝ)AQyIXGϝ TIuTt`NsB(ӵkݵV?#<| qzۖ[Ρ088`%\wfZ_,BoUYC2%2mP `1j) ""K%K{Ȁ{A4i< @ހ#)4*mAs1VFÔ72( !$a,Rg2DOS A9);%ƂwX830&j`Xoxn4%Xz *Uב"aX-QBm>QLs6zHFxG+)u Spⴊg E< =5\r%X!^ Sgb`sEl@ls`Byn8' AH h8.'`;8 %XmːJ0#΀M⨲{NL'SӟerQJY:чᐌUBmdٍj! ٓ'n =49m8350ޕUq6ܗr9gׁ!5ӿ@͙ϊ4SxO1Qz).:~·hI&djQ 7+atf"=CM$dlf:N]W@g(6Zd1S޵+bq1" 0؇هs`/^cOe2Y/ۢn$˔q!jUY+1O>3GF`VZ+5>嵓~Png|?7Ur$ϜZc,t"YQ"eadNNu)J mt ܲUjo%[Zg\dô: ^=78^KY4[W^L,S}-3awzdrժh~u㾘G~8 4";w.M ^=_S2_]QsCtq'ϝn"_h=3zf#X9٧}.cuc KRfkɁ PB &`@RBƒ*|2 "m5 Y1I 1)pJ^pv=*IE/>URD"Hc:cM3_/u+Xse=])fUOI~:Ngc\8x[5xz{w T}O{mg ІjfB,} Ȏ6M: ң *=95x#dd9oa`KtFc>[v3v^/|bqA`i w'szK?agn:_EѼ|GJԏoCˋ{Ø:Tur|ɶӁ J2#/C.D/PjoYڗɥEH6,X ւZyǠ-&LIIj$XEa)mkԬ_{o%s{s:/Bnn99b5pUcQu*?E1ɃЖO! PmP`0R`Y9_¢w%d`K@G<+U7fGq˛|pկ`BܲSLI^12Wbl√3ˍ̄CnH[ 2 ]^3Kc`3c.E]jxĻzws8 F5"a!P9 T,,D|pej$EqWGA |29|4&)JZ "d dk J@rquy~Pz)\O1LfMj"e7W+"szjy}HWh ^i*$c ̠@1lɗ%MŠ@kMuE)g Bⷆ*Xcʧt6)v`1$r$'6֕f62MUHE[AhrPd`Wk2,P2uvh M}uAd3jab^ s`-|H^-Tw/rE'I>8S%z5]M@] jPP J Fg`yrcI6! YPaޜ]X3޶@؇;p3ht;5/.x)'yooqu*%vݧٴ7%#Sje(\tp] !"v7yﹷ{ MxމuHN!=l9**c0I%DdV%Lud i54>9\/s2/Xv{V>˺ |qrEׄION?̓<}ſ]|?Vw=-ZZ -Zn+I/ Pqi|vME. fMO+^ ꁐԾs>a_7̗.Sf#Lc畏|ɪ0b1fxh̰a"8GG5`6F91)Xg,K%HlK1Զ,s66XrN )d"ɈnY{Px8{:g- _\<0SU1gJ&֓Kv~(8+8X_9g /;n=_wHwrhUW g;ybX*^sQ:^ͮnX8tZMʝczm͝.nQٕetuvR:2Wb :O5ܟO]3ӗ"YK: uo%ΒF uM[3M[jp~q(rBOhbe4e d**٨Y(HDW/mNQizӶdf/w@azRhC`qXz/T0JEV`IdYrLA2rf4*FQL5uײ(cJB4S{T@$ll٭SLw KfZ4'CiI3vOAj7ʻ\e䤯E( -WnKv"zo& P2olkh,J#kSQhՔ r4K6R-c3q(P 4fơҞk^FVdSmvrIW3}|gtlr-mQGmM‡"5Kl0R^PH>jjUy* n5P4X"9%+.3%""8-vH+s(V8jVێV{D[qb]A3 VDVI) m`DfȌ %2*ۚE4Fm!]{~[ɷLpJoa m5?EtE-r  M%jPiZ^|f^[r"T 4J! !sQIW IT;f$m̤Z~El&nxUEʐ|fAv> ..7.xe8abH/L`H֘X!DBI.-hab͎C!4p=< k]l9M?5m[Jяy|Rŧ cw}-Ԓ^"vȖ A3-r;;)H HP;xGlFHHdE᰸(T𨔆Rr2bD]Ha"6zrŃK֭xo2$2|+2ɻУf* r2Э vyHjܥrl+ϳWOo/֐jY "|sϧwcuGu3Y1#^}%j h8Z1Nۀ iE"3b霷Cd]0LJˆbq=jag%/=Y{W^e ,JN!RQ*je1cr(rDe0Ҷ_Y=چAfS-lSV {+Y}>=Ct]a)k݅w.#:z%tDì{hQhTl_;+43Ro428ReWeE cE'JPI @(sVmޮ$,ĔrI&+8M $ԝGh Mo]Q{˻)lXR\1>_.J{5#5@A`,SNkd h!& FQy(B3)iIZZd/z4/EpPyߎ߳:{&?q_&Zա >@' &c³Xy)Vc^՘c5y*W ֡C4SDAD$ j##@ UZfFVhHG݄g/'<ާ]\uW>u h}dY; up@3@B6.#] X%eF?dMA644LJI1[uS$cTTJB\}R Mя( d ݭYN&NږMIWm4]>X@,Qc1Vl1:Vݬ=s *"`3ܡP N:]|b18;HW*A S=4 teN9o@1 ]DPd%lF0 Tj Bp)z 49K2TP[olR툗ɒHSg54TEP*YxI94.N׊/Ӈײrˋ,9:'eMdIJ+ B#$+JxR|S9bnRy_bx~;οe{)|HZS&194*EZü,5"{VlF?{Fܿ@%f@ F?%A4PXdEmKLOOMwU$ X3T" ДLd4@XbDGDrϥJk[e c&""<u7v۟kiF}a5 +aDG 1\.:6;iXA%M3c41TF0dps危 G8"!0N_dpu:J& hܳ&(.{ ϊ_ƹWɫ^UŲ<''Ͻ$o%nw׳IS.8[Xؼj:qҲ~Œ}qj6$\!`zϱK>xǪOB1.6 K~볢OKw@H$4ho.GqIΊqr]&yۺra@{w%T]| =+~k̝g(95Ocd>?U@U+D N iF]?=guB8Z3u^-3IUGv{!r|eLݱގ,sH(#:mqkB*åCR`9ҽkY]a<d81`@jkS/uz_w[dӵ\<U׷h2x[w q*Ȳjuqֺ~/;_SbTis3H^Wgߛk`Tuq0HىFX| O#\;: a6,pCj..@Zs\X_2H󘄿T.mc~ ޘEDJ2L3G#(eb9E^H.綂:pS!YBB8{H{98bgZ'HĔuuh!Fygsf7Sd(1?f|rp`ӁC>O2 >Z'=R. f2A}hrVd ֡POzz9.>zW77"d|{Y]ML<\ɟq|ӨtԾ-'QBY\TOe- j-g"a7N Gڽz]ZBZrm0&{$m|@f9my>,:Gܛ Z7N7ͥ]4wIt-f: F~l5m[2)O 9)hؿԧxfBiǰ"\j:10+?zM8 E`RGTRJmXnbLO1$ojLײ}.Uq{9(nƵ\pZ}%#͟:]!^owm?Q?vxz`a*WEk˓E(7zIHUxQq{ %< J=uqQ<|:g풾oxr;1IVX'/5ont-w2=-1M.x16`[6!qa}6/d2ee̱912X˘cs,ces@)Q22X˘cs,ce#91c̱91e̱912X˘cs,ce̱912f+(2X˘cs,ce H*rpPA98(#= -MBփI9YSw"gvP% S }[O[칪ʞ+X:+1Rq^z@[#k,-ā5!5XaR!:q:9D.[? 
ټ0q1NhЄzV(#ÚsA"H!ȨSyEYx} ;s'[}'~ɆĬ |Yo}2B궊OV(RdKJYUF˅h&Gz㰱zdJ2^`  #zcDf!R&R/5eDDL ` @[RFΖǧָz:xW/Jnѥ]|kPon}e9xz骹oqeo XFi6(Vdj"ͼQQdd麗dxebx Ⱦ0d8eQ9F-?bnX SȠ =A۬Vkxg#&CH]l[[;nڭnپyvE?[ŃHFxIԑZ.N^i U3/Qs{FL UOɚKҶ/m]Mk͖ z@R*@ =PXkzs)Y]?5Y5y-U+'[z!Z9ƔP5d6NOk-ȌcPg|>7XQ `Y7p6Rtc}łJ[Z RgO=I J$䏟y\f ˋLJ0[Nb,0Mu~iv'qߟ93LSS]8\;P_10US~MJ!շWԶ)oƗj4ִ\k(jM֛ͅ{fjCK;^~hj{mg.^674L}7tmvVvWUEZ{Fn͋:S}aţygX7ʂ!s=3zf#@ }p[14SHMBLbgڐ#G sQ‰v(!4f N/D@bls+<%M4H۝k 4z? 0r̡ G|&`8t;&^ 1"A7BٰqC.Jק_2R4 RЂ01nW`k3UvVd mၣ룦xj \T?DTodz4%;o& WoAW74r}}÷OR>JvO% USe-ݠk9̃_=:m^p (|@ `jMVIhaTpfmy>,:=ܻO1 TZmw]śRhM흷qnu}LU|O7o~h+~4uvzxxRUn,&7"^-O=UX?[od1`̕_9(]V6Z[Qր#W\wompÎ?Ϙs.#GCk.ZԀ>+sL3%b̛ oP|Pa֭lw[\mv]:ש͓eyJP 2:JFM,7g9,[LGa'Cvst`6AE*@pÈ0$hGɘ!Rilq-{@x!IeJl:-yA$"'^qFfڡfUK0'&L]9ݞWA~WEtkfZ9rhP;+Q8Jp9ph;Ey E˥)mA ,4z\!DDإF Bϩ K! d؄/p(\} 8Vq$\@KeSZ0L*@M$xX7 aQ\ՌQÒ1XdhILRL1t6*X4 `GR@eaMKεCE~_}C[3݁/50d7R)EǸ.8+c``R2`K.-2|)PaQzte^ڥ@pm I1k$*8ʝX%BSϘ8P=d:ۑz4‚Fc/{lu\0./>fk-oC1Ua\̠]Z4oaMU8.O ׺]^_W'!Ѿ*ID/=.W}Bnt-'rq5$:w}l~2ab9(r a a[, A=EC7#}*+- I(d,PJPh6c=^`r߻q&Yxh-qq"T#ÅLZ@ h1H11 9d AHRVa S띱Vc/D佖L[FS1`l@lt!Nuk<]O.}Lۜ+୺Y9nHԤBlW}Ñ2v޾^{tc%"ג֤Mԕ& )Tu d_[KZMimw4?Iql]7|xr] -t2Y_wwsy 6n؛~2^8 gpIY{9 '6U;綇Har_ܲmBL %mtB"[ Qyhh%Uɭ@v϶{vG{wqHeg06 4`,f Lcg4xjS%`fT%XRIڰ3$|A2c r^Ed(.$\ #W ,'Z>'"~'ZrW:fh/?ӛd5G'&ǿ=^21a"іLU1e`KB$dV4h r^π\}UocمNV<>=0v q"0΃W J,!j N`SEbq*:O:Ad$ n2 8m2Hլ>TIN&"!d&Ax4&DH-@j)ץ ,D rĸYR-1T*MLyZ5s6j7>a6Y#w9٩GZIXX#2#0]Q z)ks9N =:B3(+l2s{.QݵBJPP(p|g\QnEK p~zzvv>{{@Qw;w"Վ鹝ߎN߷0Ur}wyqx\씬;U LÉ4WFiZF? [5}Uz-KV>}/3ߍ&G)ޜAi7/4ZxHo`E'U9> ).qʁm{e0\;0 oBaK;9M-{YzB:8c'ɽ=HY @2z4Hh \>zIHΉQfcGRt2}(66ALlsH=v5s{lvB}zC6VؼvC+qKR6WH`DIi1ht4(9] o#cPf  r(:dĂI`GcV q1AL%8ƽC=m#1e`A%Ͻ+UI ig()%K BpYMFX:)fmD.~GztȄ!P$-`H Tt8C\@i|?|>|q6~ﮤ{|qỿbv3NA=7Ǯכ햼.ioɏ6yU){fJkip5ҍ_H !F7Fa#o7F_22RS|`k^OKu?kzY {FeAe׃)z/i\>B DVRc<O|!lD2ܿe"GD2ӷ%\ӂb4L30L8Afx6N%4ݮ Xѫ3uQ;c,SBd!h%"*ZlJv|9)gLJ뜃ݣW,ԯ%jzefwb{uU&kD۬;D #ް$yR,Exəp0zDKδeM)q!X`@zDbcm}4hY}9ކ mHa2Gd̀Nى]`P pQI-7D-"?*׍J ]Pg.鐘S}]ܤ"|'4 (.[*rUk~YqwЫӿ1A9 D/1r#JJ @)z !m=+>"=~׻r2Uv/V,݋'ڽМ ?58M !ʥ>#]x K8WvY I؎ǽ WspIQS*DUZ#QFgyFt\A2 M`K=,׊[>'-f7yһ *K,"D0l" =3F1 .DN^x;:']{rZCKqMMaa.ؠ49H; Xeid;6ѦצM i)NGQ4YU-íЊ Q>?GJJ]>Јn^4hx̗>Uw`*HpM̜^)Ɉ"4s՝z T]{?~J]4h_&?5fu0uǯFϊ0QY]N_\|ԨHpՎ/ntOBiqM6x#䏯F? .sC07ۍVkMʡ6i 4*1{-g՜y} qCp]P`$*{:BSTngwt##Y|f\D#![ / R*3<`,pfFB@*  eT=OG ՙ{!UtG }`8MeS˜;SOlD[O |KIz }89j++f01m-R$]eۓow7غJ}wб{*gN11ލ ݒz*#; Z$W( )K>&e4V .Dr(4\2&;Yzi_ZœT [ UѢ&'h2f+P f%J7h!ȠXIywimr5l+zC0O(aU3eq|_91F1ٸҧbn^~p`p/=xOW&?_ft鿨_2X sߟ`;$_vJZckgPq;+5ۛq[$[F>rӝysb|7tt}I=$ҽFEܾa¢9| ӽkq,uތ_gP靟s7jvw4)| F8ȍ/g;FMkW#ꍣg+S ~"cȝuH_|By P%#hbېo=hukb0[.YKdr B1L` FL+xu wh.G QߘoRE_Ie!gkv}x~QFͤs}ҸSB4r1yE2u`ZVќ~hN?v4kmH_erH~X ;3g#Vx4R=3H!9HiITwWWWUW٬9- WV Ia{3j&Κ ʓЏx7Fe(ӔsL(Q0#64"( -BMJhl[!l :%pOtC85sw^|ߝ]=n>;T>@[W}A 0ITPg$(33tߺ6`ǤES1_F$J~jtǼ U?TZ8@t! Q[B5ƃx_혪=qUl/uLOr-mIkw~< K;8KA ѥ$GюQPQHƜ`R!%Iɢ!1jHXG"\JlB OXQ(kQ | \K8ҬGo5r2Ϥ T {(RIxuӟb"rHk?Z,:p̺l! NkRJb |L*1H 4 VHM.lQ /u"w0tta5 +aDG 1|]tl(I#vbHԿ)ؚZ<3Fì7D1—+oeT8@ i_WJcr= < %(#3ї\dɍɫeEǼmXwc㯣ɰI7ٿׇ.t?oPz(ryPQj/Tuhw>|-3gؒuTNB*U?.=UCB1.gCڊ%?߫DE&+ieY&C+&{$e$|2>0 fO*Fƅ_DO_?3YL?95c '|5{ZaWpM_pg7ߔ_Ap4  Zrz%R'"v$ɳW%V`%RLݥ)^8>^ߦ*a ;P& \:$KD3rWȞP҇$' fMJ7%G<*:~g\<6uǹ1y,@\Ԯהσ}[i0K^Ws ܇a XP3T88;g8{pɥU9V#/Rp_]g?SvRΚ՞rAԢr^٨-n7|T卩7yPJJЎ 94CthA@, us[AS&/wO gLr=H{98bgZ'HTNB8`%(s6l.at*[= ESnhg,!.瓏"[ V]<I\p464<>@ ",Ҩm0î蕛SoV2@K 79l7ʚ L@?ik Cp1i4ƨĭ߬`q8`1iVbł_JX̕+^`˟ ҪM'bs9V|ZɔJ7jJ?!|6[`nMg9n :c8WemХ+} ɠMGRx'W7:[G~hC^rH8+;^F//tA !|▱0 ޶&}vLWjC"M$;Z@ teu-ޖ7}zX| JBӛd iRʒ翘Qիj/=|q~/tTWH|SNV3*}:lE+7g JZRP C) N J+tQn>MT']m{K02j)bypszYˢܝtLd[KqXωr:gX\ .SݥQqr5+yLL;Úg6;BJXU=:ItE! 
||@d8` !w"/LΥCz~1f8PP}/%\ק^{ulz.Xbp[mݚm%2mP a0ef;,,eTY*YKD:pxy\{':|A8JoE CJs+8hKBHºrXo; Z3kjiI?ɻfh,J-6wSW YC/x&g+xP B/dJJBBVP1㌁<<xr;ѳVtI%w\L"FUt(F1W"NF {[ wTt'yZDП,A{Ep8ʧ5؅BƎ@aaLFqk)MǚfS!:ӗx!Z9q sM dV@-`S0 US5̺՗MLd]szK6-dTBi%Hd g'|Ì%: ŭ{[.> [wIؘ0 "TBsӳSǧq(~m w#r}N_רPsq.zR;$mAoR(rZR}E9z{lPlsF:eѸIr^n{)ך [mgϖΠoyR(HgO|𢩩^bgi׍2g'"\t}=3MOػ|T`iqPJ3eD+$!Vz !ZPe &g$&kpqB§DleEprjO!T$CVhX0mx#a#g0j+w&F%s2I v--}N \3rfIL/թE$ _$EO ۪Z7] ^߶:ג)E 05XrZԥihezpX`s#Uk^~g[IkR]+_kXd`x`TƒUty= k{ʸSM$ӹ..l 7 k}W kؤ5(ތƔ.aoQMW~-4GRfW^h"S4MDS.ZMXQ $M倛thZN\ˣF_tsYs,` O[Z"Y-TD-E%*T7I49'2,وD:DV.$؝z;Jq*Pq3W\}6 UJTŕBI~F*%W\v6~2x5 4FK%⚣7w$5 jGjqHf'%0j.g T3W8]ʳW\y6*Q+c҉7)$BK?0WZZY^g P{캘 kN}$~蚂 VJʹM0]0ˁݳ ~%93 jnoRhD.bH{GohnK,#5ķ|sKEbۏGOO>"znuAp7ĽڶfPb#dթ7T'T0/wẇ 213p@O?o_}d|>s63=y x;Clͽ?lޒz6.XG2f%t]1k_u17#=wkL|JxE-/=~~{'g7#T÷z% LִwYuQN7J'}l%$Srq:vz.ԹZrSm)R>7A5SNǩFE`Hm(ZV&:?'{j%{(*#i6hFKah;ΣdsC1~-6ڔ{l Qo~7KcVaP\3: kJzH se}'Lf 0kD34Ɩjhi.QKʇ< 3zoM- m*F)]hm6JXDieѥ { cYE61b5]S=U7*)$(OpO+%IU_>͵dCUJ-NQ^.’R\[P]}wA:^D<:RIͺaհ)&tF}zsNqN>I@{"cOTs̘HHI?̷2ShJK2!ќ`/DmT.>T]X[76Вjn,ΚFZQW3j(UÆV9[0f %xDMU.Y{V+= RAvT5JZGviENeohBJv l$S օܢ i h**u(:ݡ-!xy9рO+H%eìaD]v7C]/ Ţˋu8ǖژ[Ck+.tlNc[pf祕]l{ XQ5 VQ@R7f=ak[gC *TD@5Rhٻޠ\U6#ZsA)9x X#A*+wN dΤfC2#I6>N9G1jTPTߕziZ26HQc&#L6B@F;=5(!ȮdIc@CnTzC{- ȸCLAA&-<( `-AYPT4 p;Gը |u@x`1g#~ nQ1uKs5tPkJ uF.CF>CAΛ2MC{X v-VBJAQ"Ł.0͑&A(7k:@ Q 2 }APSWPciS]TE"YUDIӽ6y{d0Wz,I$$}AIHP 6C Z~It=(aL! #A { ^M/ĥj*p BqŘAQեTei*uB(}F#v3믮ZF [ŬdG]`100!ƻAy@ TxU2%@ۈقj2AVa1 %OHv9!L %tA\xPV24ITi!Y!Xq#`8ӥi.`^(^b u6B[!q2p ? ,0PFu5w3<&[uYA; > $dmwq~6"d*S8r VL%V  XF;KrI4mŬuԆHT*Qw}T 0Au $s4ePwExgT nk 9u֬kdžQy@oBR @Nu,bY5#e$vkыA(P`8#J&^l#0  tf% VUBi5eHTyPZG(o<*"4p}XT&*b,zP>qV )ئYW/)U4yt֞Ewv4YTT3 o= Ե&zkJB i߬n$l& h6,)qjhʾL |^y<zyo77۴'gs=W&f`ҍYhl) lsZ}$|vV,Fn^kIQ3󬑲FhQAiʋӌ Ju jd <%2`O~@rXj9lzBg 37W`gr5wt `J`]P: 35diA)Kz\ Vh XW ׮uSjC;@u?Y9HH.P ê Q G PeQ-|92+(.FRu#xL z U @g ;hciaМ6P76xk+fnQnZ$ƚ5AUJFۃ@LQXEZx tmKzgAцoԈ@>bѫxdMAi "Y'XkʛrlCiAy/FF̆f@S AO8IZ{m(=mGPJ↑ttk oh J`H~֘-jUeriL  kȎEЬdLŋPpԅΝqf@Bׄr:WXD;ע`|J;& GAgq=3ʬ{Ji튢uƯKޥZ#gPzriak^;D:{רAP$sm/9zl6_w_v3?.q??U ^5q.8;P:q*0qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 mm4- Ā- p. h}Z_W^}?]{oW'@ *E [q pYq2E1.q;ފ +,K+FݡQ y܂ I+R:]1=R:갤bZ]1\BWNWi+Vgbg;? g{۷:xol_o^k(`G쌾8k^ug߽<;7O|^^=%TfmƵAs3zM);}}z}xuzgno{cz~՛Sw}=11Iݯ#7_V?V?v e/Vo^~֐g_>;&>Ii/:>M8mx:߯gM*<{[7y?-?_|f|Y=S uMC\AWy+xc?vfI\-sͳW/__/Mڭ6LM;4{,y%ƀCXTc"Հ6(uRm2T;Bm4&,pX6#ÍYZԲvtV.rs@QW@(3Z ]~>;]=p_aNzZ?]=%C\t:څJ[ ]1\b.:]1C{*Bס+R ] ]}W 5쁘}zq/l[xrybJ /}j҆?ߜ^S96ےkA_I oS5>ۖc}V;v,_.ߝU&IO'__;C Yfv7:m}6?e8<2OGNBjm-k_ƚΣUX/=e6.F2\"o/Q(o-D~Atŀ-,YRњ_ Y_ʪEǸWo {B)np0sHg[ ֜Ncʠ`i):$'7%\"ZzPbL{ "Ur;M* ]\eW Њ綆m rՋ;1OWLS.Hg ̺BW C)zu銧Ct9Jp3ohmAB)HOW&p:~jځéjeIPzE @`{8t}-hT?y}a#E`OpygZB+Z?P޳|RM!NQ`2+tJvJ(5ҕFR !HM3&3"Zqrk7FOW]+zd!˜RgZp:YzО+l=e'tE{:1fmfsJp ]%ZsZ HWB ڻ1CEB`|p&lRlTr lT 23[3>~6NU:TuXGr%y}}=uE&J]&Z[JeCtxUBKu*78]`Xg*ŝHhj;]%Lt N[1`;CW!Jhi;]%HWHd 볇WUBYJ0ҕH(!U+:ZENWR!++E)GCt+ZRvJ(9ҕR.;3 ]+@HJޫ]# C{T-\Y]B}!R1g5tujէM:Rv3tj\`=tJ"gMxqFр1Wc"wAtb9߇ڦo@>WPҕjYTVx #& (ɡ)#^/|UF}/~IƜOo28# _Dܤ0+wNz l&{\\Xa$A1Aq3$͢eQC߾L?˔oSS+l-FT>STRMº~jWf s);Ef(i˹ Ig[wsc~dӏasv5ouV93ƍw/ /rǣ4 ]"Jj06K2I!h@NRx 3a3+M\OI r4Ch  ,0]S~i 9|r6ȹQjEdrbrX+"SpHvlp]z)_%̖yCg; yw ̢mZ[Qvo?[z=P K].\oU֬.7}%&/;Y.nej\{1a磻r8._Jodf1)$/7r;lz2܄oN?2ɍݰ|Rf޼ LK-8;(eutg$d?{9[(>OG*Rڷ&Nb9@9,r:[ o&c"O#c Ɉ qnc=5+*6:!.!3TȰx_aR\#)v"~ASuNnIA =F;FA@ $ n R`;3Yuu+¥&LI @Y ؂p-K^jF5:eIÊWN[: eХ"! 
ۙ,?QC7t"ߏTN:MU9aHpZcl+yr+QTcD4X!7޺ #mbÅM,wzy?K<&M6Mvi+pNO_('/h姖 .s|903?>SŨZQR iL7t.[ƾ E4ɟbܫbP,NJԁZh'S_I.׳4tF_ɔŌoax5Cz(P0.q +lϦsgiPs,#hS<ڼ{L+4"bzj4U {W"0(bw$)E6 ܽ4ACd@p`LyO|0LuV[ wD-6`M@\:$U& 'ʔ~N<f!X\l*۹Zp,zaxǯֿmxy# <_d훭19TaFRE5;Y5%}/۴^]^WO!ժ+#cdFv #Y4-S̜)NJOu\6GWd/[֘?|XaP4%-d%A -|n x*ޘzcV" :D, $&i)"/s׋&+Z6yBkBx_8،3i#&yEqDtB[EpZ7*@;`AKlnd9ͷX~:@Ďv(Pj1 UNO\K!֯""鴠`Я'8zЮ. 5ә]] .]ՕJ]] WpOufyZcNtYulrpMѰO_uw}Frp=+eeO{ iaNwr_UuJ) _Wi+,P0vY4a Z{1[,3Vكe݉S)ڬ\[?\]cn~=Eb ‡/l璞up0};'w%>g}(ffﹾxvnk~$Gtj[osc_[eYo|=m`g;gr &>کޫ/"gx;SHC3KFD¿[O[[kOZ#{$| ӊh\T9KK@q`!P9,Rg2DO SDcK) wDðqMok ௶k=q:uv_#,q T}o~3+ņˁ7 gޔy|Wa=cР 6dPG5DbB2ȨSyE}mVvWnTX0oۏ<'|Gѷs&FArʖdJ֓gf e^0EF3fX?p2{5OjKy5⥽b^Mڟ'RR|!hxF04Fh  =?c~t;\8atڤNs0Kf<2hxpԓ,lFNZѾZw <҇"Ex1D@g[kֺ.tacyDФtPlɧT+C\/z^i:Ѻu syԟ+P^9yΝG!%,*:Št 򄁋Gϝ TIS>,RF/aR/vH,DDꥦ0!%1ocpeOE& ,O1؇Jջ~Q)p=r}]b>_n XFi6(Vdj"ͼQQdd麗,l2zb81mX 82aYI?bnX SȠ 4! CYo{΁H|f79(ywy\6Hnjw'SME@w/3+YAR$uLj#Ha"WRH-ݩi U3/Q{{zixե {c%w\(FUt(F1W" 'd6- ;*t`gF'uپ)h8Fgfηz#$Bڀl,C"`+Y8Qe~H'S\oe4JmXфpHb1PDQӞ>L_dCz O z~p#CXi*tHHDȰD2x.s Tzǭ^4^OBT=8>2#qxxĐ3Q&E0?vPz.ӪXR=?x`!n5]')o~C4՝p@T7hʸKlhtB偸 #M ?u)~6Q?9݆=>@_~&w-gU`~]l15PA8;JWzmŏX1pc=DbRd +Abp)6PMCj?ʝz"3dR F/i8JyO}BLǟ<>UOpt?\x3)jnUN̯ܙOUVI/\!}e}E|| KqZU6l4{m{UnTTTxؿiugyN{A&HJ3MM~~z7-|z]WqA=}8,`TG w7pz8,Z+휍%gQ5v1 ʷP/0'lO= gAv?;2wg܈LX(4៉/ONaxt:1\}o6}ɋ~N"CCE=<ɧ?M:g3O w󂚟L>s{w3O`_^w7<  ڧ#& T?Fw A!\H^S8_ZZAr&hHq҅.'%6S-`ڼɗڨVP iM`(#J˜B` '.\Zs9BSr)0 )E\pYVgNqV:Pswe6I׿W#FK5dLUQdY̊ :jt)7 &9Z늁 8 8}> LNk- (5UV+-Y qyŹζ^9Sk\+zeye LJSߞ`ۛ'e\ԗ5Kc`-4ͺƩ Tmz y$ iZ.!u$il&pd\Dmf![ G (9siy0YVUa!bUj#Aˍ FH8Q>k^HIV ">?V g'&i_pyϫ.^/ceio=Ac7l=&0$'B[vUh__jg)3*PGťN7~>0/oVݓ4x2${p{9]fkΛɹcm˹+tuVVo,yB{ߡHz^:sr V ʘB4*5+>>E2`qs&֨#Iǫsk56f[m/aN$aҝPPqݙd] -}^ eͱhFAgmCm0#S 70d\9ĩhi^)G](zЪ'?9&#d" E@YQd9KCAI@"')c8 |4ĸYAoےiB5\$7XpW%,-%5 d|^x`Jٸ_ym4yrjc؋9QpCzTp XB@\_ I6jE(,h@X OVS=IzvӺq5?cXoݸuz 뒤W3N\Fѩ]iOND,وi&n9:*p(]RɬHmc"NhmO Ա[-mqYi"Qtޡ'/t"I )`1!wܤ{?jnF{gajuw-6?a6˟^и[0b3t4\ dx(9as>s }% #cGRhidz E(nM: x,&C,hSh$f[PZMe6=j v-OqIF0().jH|0.zƅד cPB7)dbHE4aML,H$HEYEHT'j۔U g3ND}d/ZckՏ}VFD#bj|L @GwsoQyQ堘 ڜVp,E]+f=mFe rA„\dB %-&1Y6S+͈x%QQ{|6uV%{8c&..b+juX/JIzҊ H E0!2 QChmjc_<P쇇G{Yub͖fj=]-W?fRi!w @}-w.@WJmw(QջAwDN}VRvJЙMK(W%J=\Aei!*Cw઄dWD[4pʠF.;W Wݱp hpEQٳWVsûĮBt]QRExኢD{z5p,zi=\01NZO WD̜h}(n\JposM]-Mg.H3x]lyi&t2 :}‹bS3%Ϙ Ho 3Яgb@5>}s[hN+nw`D{1ݢ4=LE\(PAn/&&>a: w] zQU,8iyOnpNh.OC4t0ş;fK~ot53הJY~3O[mrxPq}qR;6EJmf7,6f1}!+oEJBJ4 {i4t\tQ lg઄k:W-UeWoCp;h%\ٙAj=\(p!*YE;bkUWռbD)z1 hzpEse:W%\vUpEQ ֳWj˺#Kt(VyD)7Wq`]ZF+!:W%eUELpj ,z}zvSzfppʼngwN[e>\aW=gh\QcW+pU=UnQZ-•vI) J5>cBq}R`^s fy v tg`+0]50]\2abٸ9DU'16+fƍQ P9x\<9jLR@*\\55kS9AٓIr+o,T1^'sP)˥>q?{ؑ\Bb# .v82OH+i&sz )m€G._wW:uwkpTrƨ6}h18{Xur-j%`*uFcךQ0u(tm~#e[˳Č?->YV4gqp3~<&Yx؛ϫ 6e|GhQȀ.?o.myn6kM]ʹv鮧f}xX6U0Y=U']QVMhYMݪ |^Z5E7\8EL2PYGfR#= q"iZc.WBi~=r6p}`{Hg^m+ZIڮ'a \=tb3\XFQJh\ eг\\G+O)R 29l噣unlMZ7j)M6l^e~g{Sm5&}ΉMh*3\ uSwB9%fw$XHL+ȕZ=uYRL ؕ\ QJh e/u\9˫/WʬE@5uYR#y ʏ"WBJ(j*8;V1(#W4\ m S+Paj*8݄ң2.k9g:#OAmnMWNT4Lnxk1j](7q `"?E\FBk-P:5[=&6n `F=\J(5rren$weUjw%G+5^"rrw~%0rčh=M^j+";\ ȕNܕPrrX] 0S +5\ -ũ˕P=w+{=^h;Z72WQ"G 0r%<mTfr%v#WqáG*=Է6j;\rybp;ʩ[UCOs+۟E/+LhY:Ghw`4x\3L L %YPekTe |-L[dXlۃmOj0B?B>B>B[Q-0ۼ'yV%H8S .Q NM*Jj)kH:8aJpG+к_˕PGjʫHŠqܕrE@4yw%+'~gʕ{=\ nݕPF?U40r%ϽV7+ڎ\=\E z(w%A #Wø+wW J͆^#j;`y-q.8%{ﶤ z@Y2up~ 0?+ nm/<^RYQCSPueuOAmnQ6hٮaz^iov}ILnв(Qf$ʘU. 
ؗKai^kö|hS((n)jgjUƆWM*?Zn cB n؃V?sQu71Fǧr`g9m*F+W>|\ eT\\Yc)v(WK\Z.WBfGr=ǁ a w%G++^UP7a*(Vӗ+P]\yӳeصèɛW.ť%>+W=x.]/WE޽9|ߖv(rxNRv-C%Z0B*F./n'cvqvz.^1q .l?)8;;_|>7Wp?aC_|yDѷA~=|=U_Ԭ*'盋YEq⃯4T)tdwN*E4HK$ɏ"-pt*}.5-|eL:űb!1,T,R~D{yus/_Q5xA4;m1|q(}啿}QYs1HQT'DS>mQ޽z-?-oиtzxyuv;0`Jyo;iwmu^o]<|`c??70䤻\OfwDy,[H#|u:{M-{Jb^@FS~c>5pAxX3wͯ3ď3k4tU#.ܝ] ]޽8+:v8ǖ%FMM0DzrloL.,bj{));]#hB0h~}/vkL5 MZ)ʕ!>e6&]J'M9w Fao{gΥa1Iw m( F%<Bqx= #.h(گTڇMV8Ob3A6ud&;yXi TtgvJQ"A.0͑¢A(Ӟ*@#%l~GBYGSm(ĩ(H6nY]TAT;u]0%ѧVOPT`3/7Qɷ,,$f|0@eEtPF6 #b $=(a5}AՊXq{4iA>xO 9ŌTUE9cmDr2|R1&HTUa:&D̿K`N{f`w˫vzOIez[ ޹n|-^u&xG3<.^T.¥6#Ll$_;'A+UX$T<Օ7!M9RsUh ZB &FxXXUXD10e#u1 [X9" 7C%XZ5d0j&3)[|?^ .d Uκ|`|vEDX`& fC4Yx4ă ntAk;R.2,߁*QNW8hw""x"Y`AԫBOedk!ޕQD ӻ_0|`0nS5@\[v믿ok..F?}H؍!< -LJEc7+* S/>&t).ݬ,yi1ow)izY{QWّݖX~{UֻO]ۺvW>-n,/iIOfW׃zlk}O'>}ye֊k@% D&8 M69+/۵FEqJ5qW"g(l3D cP슃)@1sR ibRVŜ]@%SfԆZcG_cYHWN,gB)3>X9Ĕ6=REztR~JKgP7⧟7v>Pe>@a+[m~w}߸V4їn7G%==7`xS7ϺہYJ8+{!**v@EJ͐G$F)"~`bBf%Lt*#9j>[?!Mk4Vm2+k,Nf*c|qs$̑,0&aRA|( Gzp=נb1TEWIN}gNf ARZZ:({ 'ΙJBh {rݽ{ݍ[BwcP˾ y܎7krFz0}4`8`oR;Z@}K=wF߆Al|% xhmojtyW@^_7_x _>Z_,f]n|OzN^aeǟm?^H "#F[??yWBokB2~;Ao1 -}Sn@ -)A~9 "gwx|or& &eޫF]h VijrtW&ݕG+/tWV":[&ZM\A2`+-qR)+7@S沧Yi^"kg5qd>m1S:~:n2Pctڟ1NOYGtv"AO= ~&^Bm> k]w]z5 rIIwv_dhu/a8Nm*{#?]вR[vypd刞Zׇt9w>F!>pq_=qtvM]CV믚Kڸ[h9EO*1W qL!"rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL rL|ϥ҃1ۯm)5Bno/ _?)y_w?0_A/`oDs~(1)|7 ORj?"%Fd)1V>oͅFHq+0bp]i'_pAdx9:V$jWb7O;DԎ'HC" J!eY%4C> 16hIACUxҘj4H>l|nDwR7а;>!ۏM;H$ aZY|ՠ탸WjW#º߆Pĕ=[tc C *of^H0JR&~R#HQK,Fph1 '[ $}M~;z( bKfvkףx?cWz;${9~*N .:wOqVLY<vהy܎7eMr6IM .x;Өc{x?=o wvehmo]Ev/lGLm pETƋgQZ~qκ=RKӬ. ײ%&Ho'2_+=v K.fO]۔b7r\t.T'jlos6qaV04MNR2'i @saCN…V|b)S2.hs2֋Ujq-ԕmB[xV[DWEvMw;A4q2q9߱&h1'Q)AT\h< 6 H4wDب CI\d{n1\P$EFrblf&aU.pMe]MO۸b^ծZmSjS\ec0![]ԪPRqʼ),vIdRQfqwٶypxqn_>5KL]׈^yӣxW(^R)u.jnVM a4\a 4BiFRx=az]s.|+ ԭf](4ȟ >f轏k~|/TKTpuK)l`|3=ةP\td"Z2d~+Dp zےsup :t0wYcv¼Q #z w_(}47:eI 22tϾ9w];ot/+Jם3qN 'mVS+C &"r$qR. J1ҚU&5Q^M#s5rjPs^IƐM#|OP4N.jTڍ<=YwyLur,k)-'R"Me%\6R‰ZYY}H ݻ/B/]BԦ|G\[(Bed8o]I9Lc !KϬNRǡaNeBH*T7QSPۿk:f!A2.)yGVyA 3\J‡6$iq!]HgZ۸LWH_Xړ\dGr*"Xqɶ1I]ȡDZJLr4F_S+ghb)`""cpD8PiCHF8pS_L.?3,7yʈ"*p}r'~|V$7NX$OWϊ0y?&'ϵ$׷%j'Wc'ُ _ѷVҪ?0 I˲ }>\#:%{NB,}Ƣ lŒƣ:>J@b+k5B"ɬF{ʭC" 4L_%Ka@zg_f{ xic=iG7sh !K|8{8\aW,p5 27l>)D঳/ 2OYqDj`PNn3I%\Ԍ`^rA}9ӄ鈩;0Eꠘ6w~:5J`Ȏ! 
)7=rI [` ]JiX&=3%'>{t9,5p^N-IY>KUK0^aР 6dPG5DbB2?yd)"nxkKXSwX|շb=o;aВ tqsCa,o;u2 Lߥ|WȑKpO3.7d^﫽Y*j \o/" E p>\Eƛ'qe wQ+fmܝQ)N@Yj%,,Y gfX3{<ۯ)P]Bn `wjpo 4fp?A ~㥄{E JPK~7եW.*wˬMD5w*)'<3 J::bQB[9pcv5HMIV iOQ$hc&?ͮxl0g-8a9-r͙A6ƒE ,XOOOI&D*UBbR5BgMJ ^Z H9DŽRn t*rneZEBc@r``Z 02mqsw[ mAI]y9ޟ1|yƼ>۹d8wOg5`i-;Ky,5VaQN؃א} 3Fsə :OH-a[e7X-(c؊ݞR,#.(SX#$"D:r),zτtZ`+Z H&5OZY\a+cOoy)'mns`7N|&2ϴ=sMX9GS9MA(s6ðJzelS6w)wl 6\{ΝG!%,*ĊF ,ENdN irt?39 ^nJۤ_f} pʱڔ}?8O/j{_XtW|bxZO< D oLBX+2j) j-KEJ{H'6"/ζ N0<`־0d8eQ9F- 26AXBEնU;v`gVqxY&[\[);K(y AR$uLj#HȥWRH-(i U3/Qs{:ӎMg=KEO'y4ɣK±B:`-aQšÖ:0IV$OR׍$?`GZAs%ίzGHp# d3 $*G` ,@q9:SؚE_.֛Œ5P !)C(mˮo' -KmBW7"Otc.60R;!G4P@K2:#PD;_ @$LHo@Fz#f[.JX/Tzǭ^-X)q { fU$;Rƣ;zծbu\ 2E,ƝLMC!c0)jR쇚JJj&GkĬ[<9q )$wY4ןkl|<0i|%{ew1Z n{8!;s2*ԴN_9o.)%8 /C#T\>!\DG#)72Y?ï:ʥڟZ/LPz߽1)w*gBxs:SɆᨘBggwgϡSPB>xMMJxs|\ؔZ(ڜ-rm|]#ٔ"&!5Xe;%j֫yJ;5y )D;#\vF\%j)vqUW`wF\%r4Rg-qԊM UN\DqFe6Vjl&ev1YNS>%vkq,'=R[ZLgztH$n~.d__dûC"#҃l.޺BAէ_ᵆsxloh<JxZ{i6n{Vsv>wfj#i񛷺 EòܯRշ`$0>sz^ TS:$ P*8Σ)zq+5ŘTdxTj@ .[P_=d|*q>hIzEqKD*zgZ1TOTεA@.|Åp;\Å;Åp;wp;\Åp;w.| w.| w.| w.| wƅo 5eaA|]a]sOmMn 0Mo$?Qy@漬?5Frr`&LqwdCH`Yb  ZN};"4}xv<= d[(w v;pvRK@TӚl9B fzB^0: X=*Xi"ڷ1qf4 ?n~;rHf~~<Uo`ӳfv}}jy Az}9y C3ֺ׃ IP?;RN5eCw@u6 7;M~}3{B^.}owciX`oLKuc◣Y.(o}*rRT ʳZV^*TJ~-2Ƴ0ه%iȌ{^);02x}_sک= ǯ~j87Ae@~ee7fe@CݷM6ŕ&Hv$Rq+ןI0ņiJݿc` g&+wPT{Qe {Yfs=m`<_Jbg]p$ƒ0Tg'v -޺$jLeD;j|FHr",ʫv =yI#k;H6`YnpYsUj%ƯDrVb4E%SVf`o~~a676"f%Dۆ|w℔θpj[eǴn0ڴjӪMVQ?`VD,\#@hZ !a%!$8XB¼Va"ӣ8DLȱu`'^bV)@L\@;:-HXe0pԡ YОDNS=Q˪77=_Ԯ85vBm]r-)DHÈ>z. (2ڠ*@v)j׭ ayQsT (tO-O7پ-w}!s$W[ '3WTg$58Ҝb i;s=8'0:qB?^wX^miE5Ц4lZP(rQ!Tcv(YeUcS$GsѪkTtXogm8Q .rY6k^\(U0go&[^Sނ~#"$խatD:dɖCљbX59{9 L6(atsW[QHٺ TI>Y/o ,16.zT|ԵDUt΁a?BRc) 2@̾< djL*΢Iڣ/hMeؔ7CoCˆՌ Naf`R3 Ʒ`DrH,Tᒪ+'nkGղXs2*AB|\歶Z : +P^K`>{?dc/jc7[pm۶?q@pMB /D\yٳ ^ՐkE!8\ޔwCTUz` u@P30<<8ۇWx7|wyrK2qc[gܗid3? Tkv/b<6l_!]K?o{[$GQzJÌ=X~z/!Ug>_11㋊$1P]^~%ZZZ1[U#}-8d0Qk15EH)X5ՐMƸ6QլR2^oo}rZ.8XPbb/,}+&'MeAh< 3_ݯNvHV):Z ⹉PP`DU0BYXh)TL6dmCdk) s!䊔t"=m $sc~t__lͪs('_F%KZ'!CIbb]9Ҥ}J>6g*ѨQ,̮+;~+L֙HΆ^P g9p#of4AQ<hR%d|yjW%+iN>ԂVQ>+) +w̕W¬\DLY3 CoψR*juNBVq=CΗIP+$5ڤTI݆sI7vB?ums!-J6>r񇿸`HW.g߸V:dPUa& X JTuk6/xN qNJHIy*)m14QHns}FhE  k檲=vtuYQ̱xnX:{0y )-b 䄄"`YCMcjE9BJXTu.tաO Yaٲ_SXe4lBF@R]kY{Op9_)#78k~cg'8yĝ\8(1iaq5;e V^ \A[!jzRA1;:V6?CELr&m@2dM:{n?ey$~qS,9/JM~q[f|Lf2qD!"&$\R Tj6L 脐&x 8mvMgh'pa mA v%Mُ/@R"ś`&!iRIA0x΁H! -&nS ˆ ѐ35*m!P#1TuVmΡZEr[A[O&m+>ҬhcA[d+)RN atFqA[{^~yL7Gܾ^a{g?l4 4VIۏSOm&Y81^})U5^]=,ֳZꦴ7`珯%?Y5CGq._|~Ҁ'Wb~iYniýyO:sѮ6J+ʷb󑡲2)g$ R*15>|Ɂrh_Y q;A9./ۯ],|zswͲ}}g?/ylq3XٳE@l|_nRFgUޙ\ŝg2>_}Y図%\WwoN&)ZJV]ė夿{ѣy Yy0u9 ^j1zp&\ee*Re=68U\^ɤu8q-˹Ef96}7C%" X( R!6*`KUN\،j <~ܓ|y3!sF.@;E\^(bcd ֒ .d_}̃sonkFc0@b h@8j314>'q39L:mf7lJ+ed\MRLջhV}6;e5,q5pB:+t{0[TbK,ёM@5z p.?vnJ0zέ=V`edvS>blo ݸㄎ^9>:*ͥ/}쵕٢OՅ<ӥ.䍪/` zО1O]FGT(+jd&I[Y $=e-EU0Q.+F_I4.(P<᡿^v g -C|7S+h%tn|L f@Cwf6wXT=l"YEHff@Y7PZ)X,B,C1~4$! #2d"8!0  (8PM*/yQwQ\]\LƺaMXѸJS-gr@!4eDuraeRj?|\K!$CVe+6QR%"V&8ŀԶ$۪V !$kkUrI@u6E@ABIb6N'$f)贖Ǖds=)?ב17`Gf-s~9kQ5lulAIumo>_^?{W6 nP:" d >t7j{,KjQN[EQH^dʖ!Bux|b-)W'KNRP.%Mbq!m&dK`y<}RCJ~MkJ{Rb]duX};s4QgWt0N .rK]%4O/y. jBu7zQ|\3 SsCXStnUD x"̊F>>'),XЍWfp"N* z*<P|T+)WUҏA۲nQ$9ؑBw ;J˦l&iōp#çF>!NɢV N0u`!j[E}D=VUv4W uG5 DsmhBϫ`wq|M~{\jqPњaMK78y hFy-h@Xa kQk-]׫+˚ߖFaF~+u`HEiضyJW׉ϽX 󓓷H(qDڔC GX[ %󒓨6ГV䦬WϷCVpՉ1Jhq,(2DɈZL YID/͝8Lʮtbǖk\vrKG  6}<f0A-F˺.^ m3)9(u1h d17*RM~\}Nt`NjקH ? #K^l{:LPAӺ5)Íۥ!ʔ"9^z\(a^%+cO:|\s,>9 x4˨# /J-՞ |Ҕ!E\ND0YjLI\&@n<uܘy!8XJ̮&7s1jYnyR3v˗κ7r- c!O#+iJXf; Vfa!o7ao?Hf4f_?'=TɃ[,y+7rm,:n(. [~n~#+X }4p*r,prpRZWSp/ԯWOؚW 1 WOׁIq.{\vz lGW $h ťX e 䶇FAnT'mie\5*ޯj?rUc r$m1.Fc7CHڑXJLRn~evW2r_[lpIӫI~,oצa/g˘r5՘Gr Fz!rۭ< w@Ox(N9+0Qy[Tkb?VK7 E{3=ҳ&Y~;g;2'ae"X[t|dQ3! 
CqZ݁3dCg(%=C 2瘻xDpP\v4GVC+RǟS~ tV2Z)Lp'MwryRjU -xfwYVpV`=Eq99Ei8R9{ypV j:"B<q%=+ tI 1I.fn^/k`ٻ".umddSˤcLʗ6QR0:BK"ojO.+B`2/z]N?6M %!N$O^@LAi O\dQ3,ud%twﷂonJ67ykվ,+3 p9h,YNs.&祮dNئCp b%ͼ3'3?aPrMû EqL0[ϥu,AdqWB f hzM\9'%BfF2ޑ'9sCAr#悤 >iɗU|eqOXV~(p`@{aޚ~6ٗ2\V։ͱgGĎGoM&x6{LB~wz=o5?Xh?w/#be+9q@y< GaҚ) #P: 3b_!>J% IمKg,(]} \\JにF{ j>`ER3TvXo%]WWyxGϼqDR JYzOd,'h9ӾCw%qGYw4eQc"ciC#LdE%ˬ=옍xg '$WbNYH tWD*0I)}J&7?gR>:dR%,r #MbE˖Zu>еU:?mqﻧețίg鲰 =)lpIRkSky_(Puo 7E˧Цjɤ!,y56 M)#n6K4~͕'om):>9P(3q\uf'}9Mbbr'@۰T\HF/˿y)!3Zv?TwNIn\,¬hDO$E߀XPnlJ (Ԁn78?328ՊsUUAmrƶl[oINv`=u]nuר $nyԨ'ĉ4Y,v2>L腫p=sVQQջuŸB]Q7u ?N64.]h8i\+_<]iw8CoyhM0ҥbUy hFyQc.g\8;p˸a{jlimio!^oT PT4m 8=%(us/q{2vFJބ[d쥨0qDڔC ,S,gZ{H_!)٨z? Yc$ko>d` )qD%>$5IQM{0b꺷Ω"%pXNR{kK d|=$&T 3i`GL,R8"`Z oT=`AKlndyףmjNl1Fycorj-w%(tպܻJuh O {( Hlv5i¹Z(ڌAɴ1KUMLl=bF G'WBVXh+,zڵ?++ضHkOHΐC9yhv3ml)$TnRb2)h[Ֆ!sRJu \`J!0Ƒ{UXn3)F@ERXH>K0 \2N%0lixC1qoHMq򣃛1ȪM rd13˩_ẗ́\~/aBa{)[6M k829$K!#NTی퍔.&WPuɚ71xRK(vU}l}rh_U `_[?賰W3ʽJC^M$B RoXWFͿI.yc_ח/# dL%ԛEˢdXl~[m;q@AbqAe%5XtR{c@tSPzτδ :[ЇZDN${izk"Wsue{b*ibobބh*#tXψr:K30 [z˪Wl`Sp8ghNi v=#aribEQ#K7 s'U@6"8m\F,"rH# be}0zi*0dC!eLDapݘ8qh<Cr;|]GgWv#GwVqMf6^,6}-|2y9`e Jxeڠ:X!aVKIvXPkf^ʨ(TtKDZr9[=@E CJDccel4Ly#2Bz6U;޶q qͅ bp3$.6K =[n;nJlV^e_AR$uLj#HȥWRH-(ĥJTE8c_E-j O3g8^Ԛ5ˣK±B:`- aQтaK n-O#'uiޟKhJe7A dϖk=`\!69ːJ0#&qT=/s%:iAS8ϕuZ :fv=^@h B60IF$huڈ 1d A9vB hK2:#P~b*gSi9׆y>I7iϋǺTl9"so iL=_a #cR،`2ȄzΘuI,V2Ok~u^Õߏ>~,Mt\K_-~J-wƏ OgGtWQE>IP'|?Jn!th4%ubܿBߔRuк++3m?|K۹2c0'WEF1sӻQNr[8?KCךsL3mz0oRYmu7Gῥ={'/PwQ۷ktF8~pzGr:sJF/*$kq(cEDwJԖW5#,nc΢3wщhC4!.t;ȇuGɽ^x% <]`~໿Vw<ǨUoG EuO> =cX P x>Co'$OxaIeVa&JyҭZ/z(t{:O)_kfr x@[n(E_[SFh]Lv,MtrvD%\hwh6sq領sE8ޠ_3@e{a3h7S޲ǴLi&y.݊ U`œN[' = nEnb^~^x9WbٺL{R)kF$Jϴ!1D+G0Q"ChlwU~{^@ ku`xB8p Qĩ,&P/b (;@{MKE N&m6&zZ0c32.l&Iz@>\kɪӕ;WЏ_"-5Uכ0:kuʉAyD@($A#QxV;KFkTWrT &y$<5FҎ Rʃp-.Z/5^#֕p[!i5p41g3dQѐ0(Ac9`)sh M 3_Pj* pl-`2Xbi3BE8,,(h5 D4V tXRnWe$] 8$?EƍfHzK4$#@ga#H =z cձ8vmI_͠v;pOہILwVdU I]ʊZ▱ٹ PjbҙVgpb#VRMA bxSTJRh`~RԎ(kvUy&r4U=/\=ٔ~9 i؛p5,xC.wY~twA&iof2Z\Z[e<77yif/O/-ܭ/f1|^eJAr@;,<{Wg1KSYQābӘ@=aO)THht<3; K2fQLmfe`^UZ7jl^}x`bFH[PU)4}7xrK@=DF-^ ˸lVXh#XhmRFfTetۨQ1,%! j@C0Y7lj:_ٹW_-s\-D_{ $ ?}z!9P ]Owįf:3=K`0<7@pUJ+)p+yNJ{$9L.{|+]j* )7K}]ۏ 膫z"U* |Z%EoU{$S! ,5Cg>8tN>D |uh C ҫzM=rB `H !=X hKT:!z+ =zE:z۬%;vz۬vނcdsBp%[fU֍4WH:`t̥f=\VMp 'Wb0(s2p5'WZcf%WSb6|2pEu*pլ~pլ\\"VZ+1\G'Wb-wzp%n\5=sAUָUr.ʳ%8ؕtB7*jCbWzpǮǏB5 cjGY # zMBQ; k๽aN|20-wbU0vnV`t;r8X*Z!bjV碕mi}= 0`˃yiÞZ7LVjTQ]TBDL egm"dS|e{]P_ߏXʥ%Is Ik*D6=2W2 iICVSSnuqY_-|̽whGE9 Dk% Yf 18bQO(|>v]zw]DXǶZ Ħ""/"RLpk⠯ @>9ݳ_P\ؔV9eȈgE|VmBfJ; 3h\ųo,V:awigiITsN*fζ,"PRAA,L"ܳ`jUYSjw&BvB[Ts 2y@6ޙ9+Pݻv! 
Wח0P?]]ygįxAnۍ>$=+<-?z1GqUr^LT`mҲ1@j#P&R_N5yt()p5+'õMdM8UM*>˜, n/õI9nRk\+&ey$e(:oKAfr>̩?:JP04`iryyxnn̂|V0.A)X垶@Pj38BJEJ.SdL}&|eNŠd9i1b 7}p6喷v^t?t@W2Z: 3rkUѵuc,tZP7ʻx@ [~%S(NB,0vH8uvnlw򥥬;^k>Mɣ- [Y/[]-vU= )6d|kG#j:.>re}ūJ.)RL)Sj5F99\A8\}.U2׊JvAbR9;$>Uaa/Rg,t ޘMZ^qk<ݭ<7 a6o~sV:PfmSUx hUPX%A兽pR"CPQ^C{!1m 0YLWw5qgfvĎyEb}Q;6O=1؍<%pX5drCJHXT6Y)슇: )( YFQ 䢒3prB!jJ" kَ Qߒ\Ҁq4<}gD"NiR1Bq Ph]5Yk(kkd58 S+"zJ9B"h_iȄ6Y\+NNIAa$$b5錈َgE: .Č)m셋=q '\\3}m_`ChEbrP 8 LFaCXMg<4 X6Ht6Mѓ STES)'֒vc/g|>{]WHO$!ٳzjB?I8!Fe`SKK6>m>ly_y_y_lΜSdbfT+Li]kjuTG[$VJ u1k|,jhcƀVԶb08o*A{]|^b-C.a!_zdc3M<#l`5 *m=m5 <ֱB~}M'I8S/+p:]l8Vqջ&@{T֡|wcg1Ps7$,!jUc!Vo5p1G#AL[B&B;I#IH =|rD|rS;(RAeX8 ѡEDM5ViL&U=Yh3 2@NMraTNk Y`,2* a,oYìuR3{AWo)V3$FM91˜ CG yﳜ1_vsmM{f5efvo+1vأq['ljս(Q>_.ː H2o]=gk:DL8\_z$ ơbOl>yN$K"/-O&l9T vJOoP/I|;]֕i8NH ίn`a&\8gɻ0ÀA!hUgtIE{@2272yWbA6Xj̊X m[I1"hrF[kzO}󵁵ؐ\2ԭ]*lQIG-r&F)xzz=e3G\F7Nn:]KO9!%S|1_iKn/U@u2 mjJ4UFY΍dɖ tL*l8 Ⳇ JrV*W2gtQ0.,PzxпI-63·vIS)(%YcGls)潏5"2kɦeJPb2hokYoȩ^tb} Skd^K&g35YP@U@qALΉ GdZJ9З*X6HR1*4Ũ8pV\.z)-ffrG-2<4KnALIk `fҶ r%+k@ wy(ёhc*[2˗uÐf-dھ<.8tYIL1؇VI<qDFlƽ.e'գ&g}; :cf5(E-qdnSq^kѡ*]> 4vq$a2;-,VFGLwuwկj-È(6pQ$-*ZAWv$KajEM >l*("cpD8PiC`kvZՕd 9A,(#D1Ù;ѝI~8$zQ6 "Ns6r"~r4ΊM??&Rެ0 IQf>.SIEI2 '\!`njhB(bhC;cɏa{AbOCL淥@H$4Yh{}?sINAt]Fif?RePqξ+J5s)dd\ =~-g1X]=:Nj&Ƿ&+D~6uCOR9nK1tJVOEdIgo[d܋n7huNS1I8d Y'u^6_  (85!ʥC2MYA.&Z/a"{cJ#N41'ehFLȇǑG&m鎴?̿UY]$w ^F6{ehLv9,CI}MbLStWw+ ܇a)X6f4QX5A#'.>^79_`5"T8:W q쯛m P!+=inZ~gS]*>k;coW*oL=sc D'aas0L[V%2ĵ3"j \~2x[>xΝG!%,*fcEH ! \d@TV{dr>.mJ*T|{v:P8o}s/FWϳ֦1`e Jx[H˴A!(uB ìPkiVb(TxKDZDD|c -0<0o NYaYI -?bnX[A! ki8V#?ro#IXGCT"j+nˮg&n5KuxMd>S7q01"pm!RxD:$&3'Q~;U"kg nXt p0ӬZn/Ws]A~*`zQ u}: s|+.c͋di#gR؜`rU m1bJXTx!s=ߎގ뾝?}᤼~~[zƏތӡ^9?wU]J[K߿,^ʾŜt 8-R|Sr?p~bOq̌>L쿯ٙVn^+.Few8r^}~U6:X8'7p'vT 4)Y|{761Ͱsݣӟ~z+csnxZ,s $~ws_-LM?WUdP/;9?˟oN$uT%@< R|T^5"m7uˠ&{dP/o,d>,IVPaDƛыotno[ɧ_O@Ǘək\~^FeVЗ&'?3MpgiNVnPTnnLh=79ƽ VL n{zGcpLx2&7k:ey^IxDh1zc03xs&Ӿ[oKδ8(FjX6#G sQ‰vB"ChzUjqO, H?(^:aUyJ4E6AD[E1׆&iG;c.s(⑱"4?BƎtieu{S^xXuȇe }ӟf??R3=XsVG}3^ 99$kȈV B[ڜr ٠͈hE3>yp':^t(ϐ! 灈r vJ8C_K^|Q▚X%.hjoG(N0R>BIb3P5/{ev_3u/~:̰|^͠"ha~47*v]ؖd.Q+%ml#1?#yt5 Xp]-+dStH>x]  u5M2-`Es`$UVCHn|pP4`pc*Jypԭ'py'WAv pQ歨A\v]z k\Eh \Er+V#yp<[zZ+qUӐ('w%+ϫu6aoxx.W]p:laqgxqtu.AĈ\= &D_m^?%n&f"Pљ?|=wMZ k3dsW(]OSŻm!n +*>>JaUzeM.." sM>F#Vf~٫1,J̞k^P+)6gF\_1}ؔWTMwx{o [܇77YCnvww$a 6 c<+ySHF&*u>C2M)_E^E~pvWx6-pάQ Yi]!vnKsǬNLΰ9n2čn17_V/fi%.| [Q?7̏{4A}2Oqۚ~ Y7fâ՘5: y%^$%%޹̌y"gr8+ʫ /a^:9`LCqAokxB>Q/F7bgSO3p8͝pϽ8~a Ѻzm|[ b4#H3ֹvn.F9}NZVzQ*6i&B(%(YUpNhe'z)]NXXεMKZSƞ>$)~d^!{?ix’U ^hųy[0ܝz:n+!zRosW&[X튩p ٤/MyZsЊ6i&HJ.4_ Ի/( xSh)]]\Bw~ʪP5>g I4ǘƬ.YN]n7,R7TGt9B>}hU6Z;zzFAn!67,ڨNaWrj?><Ô"@ ( XM*L?{ &>EYs#4%)RX6 9>10 Hh\Er4=He,si&`Xc*KDS*R{4HeQ W@7 "P.*ꃇ+!0¸gWRPd  ~mG.o \EjTJ3x$ + \U*R[c9•?/\U$W4$BK"\ ^tĊ<~ v>q+r5zdڎZH귣Xg\v]z,dAp ~uU$WP:tJT#\)S%*TqQ`*exzG.$6 OՒm]kW:7g^RٻF+Wc?$ da AzIJc9{I!%6E]faͨէn:֭X7A߭8d$5Hd !:h%mDF[AN"_D憾"2$t(eDFi3+Lʆ.cWWrkK.Q |K"ռt(-JYiN 64\ vms] ]inRF "\ir+DM PZ;@2V1S sF+˳qhB]]"]Yk7(7,_DWK%~?tlzݧ1}u\ygl\hm- Jwtut*M&CJD LWK7#BRM#`)it.4hh;M#J:@fZ^yL`7":I7ݭ9b! AaBiRh!)wI^Ԝb+}kWYxǺueѬÉ-&o3Rg#Ry5B 䝺Е9M0k`N9w\vfhřv+Kqm& J ~OfչcW3ڛgO J vtulW_BgCWW\ vo%ҕdKXl%42Q5t( RQjݻz3fh-j`$i-4Z'e=ڽAKdLw͙- XG MFS%\--RN^Z|5`sLfpϽ sm ]] ]p2'uϾA"r+DZs(0Ŧ4.S$s0÷la|X8ɰ Z&oKb`Z% QaZ]$DidG$!).hس=ʠpߧI% |:wQg=Pb8_mHexceYÅe$TPՙYjh:#dIB#^E?V%O epCi^]:i;Rf4qk{ yYʽJQ8g>:˝*鲄T\& ҧ$@k:-Ke=/Pr4-FB?x!hD>ê?C{&sFE eoe=?n2^/ϖzf^g5B/UIMGGDA xO&!Wqly)IC,Vu96jJ!jS_f *&U`O@iΪD;:4NM?VTyts{ޅxbB^r9[yP Чwtk1‰A8r2z? *!NRXnq<mz51_'S#(ԀP?3甇[,ҍKnilչFpG.KW@IB$^(keJBHjbRuiMZUCEuEՎ_6I+^h\2p,_&tA.-tU /vy Fvn ޸,Ϸ 3khQc-+i= 7 ˖?F`0yw c&MoۖNdxv Y᣷ϓqh$諊7a.\EHiJ GMe^X2U)9%󒓨6Љ⦨zϯmք #}zjY*ӧL)யoF`31F)+.) 
${"dDAY/@ӆYID/' 6/vBnwoSd6"?\o`zz3UMqKBuϜ]=̤=H}vꢦPf~ :5޹ex+p@g} y>ǡmV\ގc>0D}N#(^C{?5}6|L?N(ľ+퐒3-גhWʣx]i<^SBr!kQ,*h"<?~+?0/FwOoF n#6%v#v ?vJLJKh^Oi=jΣן V*ۅ!"9^x\(azJ<*(5;N<9ʹM`ur ^+E)-eT^ȒJ-՞^2)quڀբ*(t*Ia6j͙Brc$ nDA2TbC?氧N,؇_d3(@'xJXWt"XU0cx!w^x5ꭼ^z3~ LUg!ՐƂ_e03 ,A?>RRƹt[")ւD% )5?Fxyjat28 OYM Uz,sVlb4r E6 [B¯ʹsa9 SM 05wCe V'42l)]Pש@`CIeۢZz;=i'_3M0:$$dk{SIݤZ-Wm= yt, M^N[c*6UĦ|A<|@u6_K^C\G)C$<?NG EMtH "QK$I50B"+yԖ @[Ŝ`QJbO,OzsB1O6k}^?=[V=ua 0GaYz=4ث_,.%yQї HJPR oĊui8\ <5a# #-̸#@ *@w^G4 B楱?S镤8_:a8J3ѥ$Mm`7={r,*Na_̈́vVq\t1l=lz$eK<(ƃ֩Ӿ:mɽTXtܔTH!) WIG `y0DDik:F;zZ;^w-G ǡ8C넺STRwTÈ}{ :LO<鰑G`+`#ȣ6:10rBJF u6/Y*aEW=]'ِ8O |xG#oU_j\'w^&ھKp#xBrq:;x рѡDu]`GGsd( ܚ'1'GgJw.}/!%WNOx7{Vֲ߿Q94WJO_X{$;M͊0oއO?}J|)it=Y!oy)>g^W {{P5-o\F/'+<c_Lծ}%b{.̗'x37f.˜F,৔JclCtC7(l;eJ]3FRHnx7QGxӸ{Z|xFgސb?_;; -v?Xn}YοnZ_xy3JF/7(‹x$mXk-ըں“B M߯weSg| M/Ʒ"5 G>ݽ*v/Jx_Oi~E:'o'ᘾ|([26>L٨$f_[b^J~RNouF%@usb)u GMGڲ }Xl[۸oX/%Ҹ3?}_j2$wh|-[ JGԷi.Hc[i{8vB^.^uj66{8=;[1tp";؄Ɍ^yIy56i3X&i7L촽DcF3x߰ĝNݥ^Z1w5fh_WOvsa 5Y*o>B?HSXZC.O(bGta޺8(gg0qI߃0i͖;w0)ZosҋV2$IV2 iJh3?~i$amt%uG2usih:V\޶[:y-{3:wUu&6Bϵ?;v|]|?s08`旷nҿ8kGXsDm[$654yjDqtyrqv}xv٨wf퇯- ވכyyo16Pd%>76y֨1E? m$f@b6-_XLڹ4L3!ILg( :s0:*M&E]"@"3Ҕ>'퍓p<&rB);HVyLWYg`K͔QꅙHrISKU\2INϢ?WG~9\M]z\VT"J`JB Gdo ѦbIǪtRY Co d.B!AJ:P 8c *J5q6\E4ew4A B˜Ee4f/ؕ j!>)FRr6 ##y$و+cCBsARƁ@4g'07>8VvK{U8zY2vIcwL 'H.9# "&X @ɢz?7cOcg^z~/[׾"rMmeVX4^<NȭU==,5ݢicUk}XaUPeԼA\TpMJ"CC G!Mz(W/X+bc)4};%fyt9:{}}Uv Y7{!3vCeMۺZrD_O3q ؘtdž>a`rj`=ݞ0B&p0]TB R`c^ifaYC.|4Ba=VYXf12Cp LQ-kw*hUM ;jO.OO->]=\eNgomI\p mPG]lYbivͥYxK G]*醐+Ehxtͩ/zBvĠc.>jRit+im7!rZ۷]ս\q JٕZg''+7;~yBH_Zv?]sdeX+$Cs},,ؔO~OL|o r_Dn6"lN}l{{5~}xo>Oj@$wޠl@k `jȯ F P>`(vǥYH6))Ȩ,U"}Dp\RAW N,'?x~'Pd j?Sͼzi8$ g{t>!Lb.HTY JXZ"L2 Y!A$(SW#Y, q@E:+јwCF7\>߳엧CW] ,KK޶Kw?89Mp02 eNW.) +DuI6Ƿ[g>w|mJm¹f7 [i 08j(wnh M6.6J FlA8o У5oV\ !I LB&[zPRѲ'$'K$2!fa!%iN|R4T7Ul #H s]ı $C=2Abp%&"jur*pP`CjlvhXH^4 Dzd|P|5waaVN|"(j>usمId >12|5N*eYkc>rF̔;;T2v#g$geȉʠ ^i8eC)H^GY-XJu  |M.޻&B}/쓚f|b6z!;FS{og`dC1 "h0u'ӁkҀ"l%}V|JE!!A9:a6۫%zEóvKRڌ(xRw";xNa_HnQ?1f'L>!<-j9k;照̔,,Br{Lh#ߟ~l#l8ѶL}I[Ws[c־l)'ꂑcsAW3?[ ?n;/NZތ %(( L8ΊӔrQr=c+!ӱ[R SsV妐  fj{FP?85ٚ I'i\C5JXstj! 
X|{&Ut(lIhcb&:6{hw'ZLm)Yk w>ARMDc}V53&[]J'Z[D,=yf2cF4Cc>anlfV\uUF-9+Zg@x 5%gv́&[@@T t=h5@Yg,4D2R 3 c ]E61b5]F>890`](- Ob7!E;/>|i@yۧG$㒲a0P.;3*Jxna$XLy2B{k(|-S`גuV6^{'XEIiޘ]`{A)9x X#A*[;*9" +ʄ7Lt7 VԠ %=nR uW4r e2a̷A( `-ePf/jLi?Q5<ύMAZ <;& EY7 {G!!.A B*)UKU r 6lC@WJokLtR )tm7 U YYgvfwQqϪ J*H{m>2 YcI?5 !1H NJDBࠄfNY,Dbt\j=x_Q>Vq޹mBAi!:3@0ڛY,KT'f%$' cEUzގ6`;iBe}xL&/]cu-xUz@vuݵa6jhX ; b{PT8xiTpMO%sɕIWo#f BX9Y4<#y0 /7+XqCy[ɐ$RQd"5k*vʇ`],L5yOG30}dH֨Vx$ ;< c}`QՅY,TG76w3<&[uYA; > $dۍ)~}'kWB,UǑ+o0nOš|IP eDA!Dc6@^Z@mDBMuj W^XnDžAu $$(ڽ,=kq[SAh[Ry !z 1 bZڝXDjFHd^ZFq_Q0CV f՜%\0/F/`z'7ݏ]/NVz~M{qr~o8ser n P)n45BgekOaC5vwgU !b1j\ k1i3jFq5R 2j31M3@9>͘ՠֳʌ9:TLE쬛YN"SAC Rt,M =>8(!==` U߼Yab6 ̓+Ik׺)f7pOr;RyceUè˅PeQEҺb$U7Ơ<RuN` LB)1Q氎Ih j hNhc;(7 $ƚ5AUJFm:3/QL|YEZx tmKzzf> G_Q#AhŬ kB OU!ͺZSDàegJ k̀|12 =pe6^HߟhJ7a#U2'YkOY (T| )nXISG[E4@15fs˥ZpUn4& XTdǢhBM)PpԅΝ0qf@Bׄr:W]wE%蕲l0H5zAYr~~" 8fFUO)ܺ]4~UV8j{{7[?CĮyE27ѦulUÆ6w9J؆Y3\.6(P':8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@JښE]aƀr.eq1W1ڴYUZ@27H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 :,f[Ȣvq.86@2EqRL˝@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $Nv ˝z7־]]J ]=uuz1tpSX ]i>+ ]]{,V/G U( b[Nߌ5w챲GRћ痛omS,Jj򜎍{!}RO?yt=P|akz2Ggؠ?}vu\_/.Oŏ_v_+&L}$) wƳjlNwO_0G?劅U-gIۓG';~N73+~P+kmY*6Vl+̣U\xRYlXJѬ,ոr>UnFH2eG[X ykYe,E2Z]2JD!kDWj)thcwJҕ-8,H]1\MK+Fk^]1ʭ:$H pBW@k޿ a_S4 +!bZb_Q ]$]7=\S ĽBWv1JgWHW1 ZN1p^ ]1ZCNWʳC<ђlUX ]1\7AFIZ껡+ȥ7UJ)Fma GM/vB teҖnlXdsd1c^s_<)[m,xfa14hwJ+4}4m [\P_+>ܭ.??5-š^1a _iUܨ~b`IEq|]T妦0nA!xIAqF Ʋ*9˪Hk:*R6vcYIgW$0hu6pUug&iթ_WEޅigW$0aWE\eVtHJXW_!\)3+XyD=*u*R6vp{FpEӟ"RWEZ˺WEJWΉ]he׊s+VEJz :9m/] \i:\)vYWFXjv6pE \ˮU-_\]7 Љk?GNzoeRʎp=\ܚZP"ZxECkd3mh9݂1uV0]6l`k\`o'2)atjd@/jWlch )3<E ?,zF -ΜzlckR!K"ԪEW[Ws>Xwgfwݻw4<=ߗAS>`(Hseg cPԿ9_{}?N__.o o疏'WxΏT+ GjڗHYDRgS?NۅυC&N*Y嘥!CCg#dy*/u`tTn󕐃[Uw?; P&DWxW=%hP :BȲKFO' xLh-y^g`x&Lo,NQ/9z|C7~JRu7fRgK9 \aKU*zޏF4=]Xf;U\RԵrG?Y|F. "4Q')tʵMk}/zDqmZUˢ]c%/m"fJĥy"Fz5GAZV'[+L(Гг$P EzT 'MKXiMce GJwF .,31{-'bW&PLmJ}#&=tZB6JVOkWƆmsA+s`mK,$h|V;%VXckcwL '^f$;Ƚ f s(d=?co:&8*܂Og*Ý^Ml9RvvfQ '# wiB/}nݷH{T;1&wF玻q UJ$?#E˥˭NJ4H8 f/WѨ/u?yJ-*/~nf)$~ +_r|?3gRl8mFp?6W/>(殛.;#j5g~8-.j覵ג͵ w4ܶ|^C۽4IF7v8vzt5<=&:㽛  ݏmߪX4x-w-޶Iw&+l5QQb*&UZ[Lq%HHc:g d1,h8À2Kqih0Yy|U8^ eH78Q2i>ҜʞL뿦\c俭}qY+enո ^L*'3V5VYэeOMfV}NZMi'y|`p ط=NnܯJ%}b+npƹlxf>TH[SM;}C+0V^X3BNvT׮U֮U֮UKkׯ[(Dd棋1v`uI3xƐ }V G* l!\}pYJdN,Y{'SA tˋ_sqrcFǻ`Ѡ kj6.{mspls}ӏVEؽʺNzŋu_ՋYw _Bm:VwUZ%Ac.ocEJWTn)]Kﮞ^hUUcc8f]!dëz?!([&wwHOm>DB:s=/ј fa;$Vlk2zSgӫ:*q9lݠR8VЮ.Ө=4 t({rm7͵lso/k}>6_ 7(+**Rh!Jso{owvǥYH6))ȨwC+!=qRfZ lv LZ6$Y&$pZxDnӑY3s-s> >Vw_N fa!ħͮ7N#?y𷶘~ IF;Ule`lKKBIfa2N M2:2HH%a4:εs{-:.!(lFύBE-= o;L-q< |-]fbnJn+z,K ȵIBlhqRYn+ȕVLUV\k65,ēڑe[O{DID 1 LId1qxҘeTBr00GDıd9qϳ h@з\YIHZ $HRܫ5qs%%ۈر.x9[dO[L׺k!Nm{[[E{'Ǟ ;LF~^MIԓ/#aK8J<Аst8#txg_@|C$i*sqitNJ{JVJ (q~o}ˤ5빈\49uTOszklk{YyMX*'ȥ'Ml]p%ֆ+HYN*geʠ ^i8eC)H^GY-XJj;Xj dwfa5;`4>i6b0̾;rz!;FS{og`d4-$Hs@bָJkXؚfe,= g_mi~i\;*? GlƽI+ Y&&<(|\B戽%pbd G¨V[Z^C1|L@nMM4$#\"de1hlˈݚ8;Gabv[ӎcQ۶ڶG)ٺd"#'"#JJgOF'c%g!Z"BYicY 9%š,jFMđ0.J,QƷ3՚86Kr aضXDt-#G]0CDNtR9*&A*K<{$6c2JZ.5xg*":6F ɀgBQs#Hb$ɒeDlMxՁpq%_gkZr.kU}=.E%n>:MZ{يj ĸ 5{ SbW58Ex(@X5biKo~dw?S Mĉ꾉CQkP!gS'0`9ݮx L%h\3.s]@2& yRt<ڬYhYlye/\,{8򐸊.rK- JޅWf mݖ8Hr2#J3[rF{ar(59t᤟?>Ufft$1=_}#pT:˾Yf)Y/0q.4YQIm'+ť7^ui)hfTIN/[bl(]L d)mjC7#əs56^e5yEh{LCn݄Y}#dx yLDIUhNeb+-uGwZv/;/[JgpNm1mr+ !:EL$漋 %AKm(V(̥};MQ*+1#r!X`Ł zT MoX=ɱN9fXiRIH2 hm;aq7?bǩa{7_tZ&U^?{6b?vln6`ϸd>d2e' USˑlKl*f0HTY]UbuUpZcH1$JBI1Fʱ4X!#[C'Ya+/:\LGl۳|wCTxGDJNRNfEwH$" (O J`+V+JPiZt"p23+d6asRlChƆBQ8,d2(0aE3`CIъ 5+HVό0Sn)D1E̕72*p҆k:~},/ѧ7y `D*=ɝ,Ln~Q&OW80y?/g6I[n"v|; 1Y_ sM塸gy0@ueԣ> Uy G]A"'\!`cxƺ  %?+ s}U4*]Y5 M=O:DH}9lJbٟco]FI#_z0ѓ韹,$GcO0FO@Nnd]I4{@/}3&n}Y/,? 
@~Jvz%R'"vvIgL=cKn7:>RLݱ)pe0y_%P0 tGtb քKd} hNO5+}Jz")8//C6zf*O|x8zdjQOo,œAxj!VcihLL[U .jokJ엒Oe:L^W`u2+3MkE: 5H SLyihyIeW]03'y/ :\4W3i~9WK2=~3x{oLrCQ+Z8 :D, &"hA@, u+ML!<%{Τ.8Yq 1%w`i]4Z'@:Q-܈ Qk5;&٤5wWs룡% :E1x'I'_eO6PdNqOebT9s>E5ܡ.0" (jOdN40B$h8O(rqH`r v㈘3 #;\d\vopL.9ƉHY:W"PGnN˔m1䄥 ^*m4U aU8Rn+d't:[*,BcT)~Q4Öu"o4Lۀc+MJ3ð"BNX!,R".ݾ>+f6P(0 30667 Mn\mt2Zr"KTM&x8l}{ݩcucˣ~J{\oc}mC,/%XFi6(VdRRZ2*,,]N#E#$^[S bx> Ⱦ0d8eQ9F-?bnX SȠ uajNpφX75U,nJ@qշUZM"5.Wspqm|zPKYBf\w=Jj*?E_p(l'UjBM 6&>Zg_ n,q>1#'U~DWqG/~%ecؔ c$b?/w2-דTɯAb=Ԃ9v{EyޡYQ>jtqmI8~1+ذaV\)Nrr(B tr*pWN5> u+F]%r5=uj䘝z5JbF; u EF]CQWsDUʃQ^RXH]TQW\y0*QyPɐ+TW`"I]BQW@RPԕ& KJSW#\2._\]mE$V*_V]mGz!t !.uŷPWSW.=lRW`F]a %E]%jjJTN]Ju& WRr$>,[ (Yy] ܨ6# x=N{fO뇷Gx860X,üGhc ;lZzTDg)~8S,g1.a;pu=Οqz}y5pz8f9/ȟǥ$MJ[l3X *>gJQUrpIGBF>1:PTwtϯ?qҊ;3!Qn<9H *AP? t~wMD>cv<&* Nn_/oD޹ƑcgF # /<̇ HHo5lyeyv%mwI=taԪ~I>|O/Ǖz7 Y? $-WPد:B\O޻p#=\a"+":B\&@`;TQpj[:Te\u#"FrWqAGҡVR'G+'3 e\AapɛJU5ş8wN܄3lRoE[WR^nhgqoJ _&]%3'6>fͅ2#L-'WGPW{{?T0R)++U֛Lj+~V|\heW6-WP)K;nՓ*hOJqAi\ZYYU܊#U$<ңYaprcWVﮠ2eA+ȍq[Piڡ2]#nZ9zԆxw։oLjK2 ػqAIU--~k2rfO c`2t5zñYjG- W2WMoCd řap; T-q*CZquJyIUPİ*|ܺ jr"r";.@(!2>aS#2#x#>RFބa,ʵaj=/"JZ-1Zd}2TpJ0Ri=TF+W pIqG7z<ڸqbo• qܕ}mK+Uqii'z\Ev"i \Ap ܻRqwje"{c(Lj+F• 8 T&R?yNU]#9H F0T[gʥ8 T-q*[qu 642Snqt\ʴވ;F\Eg9فpapre\Eεx\J+WLHr+ QWS/_|A7 .!׷חWgz:ˋ[)`nό<v߿zǷG5.{~ln?]6_߻Waws:6wo/ګ>_a o'w>goS$7U5P݅ӆ3P߽iErGhShn>onNNO.__~ŵ65ϭ4xM]f$K!%>wݛoNAǽvom?l}u۞ۗv/)@^ i*(^M߸WınCpzuN•ap%N KǕtkjyJޚ3BpL(Z d튫cĕDGz!, r#apw/~?T^xo՞pvp7RoObsK\[ח'u“۠П7 bA+6Widm~n(_v z.4/_\\|EPEZR\_]7חW 9wwM. hōw׹~,%/"7/?5']#>Yn;ŷQ{#zʛ;pܔ$P|R滷3;ūAw P/*\1?P-[2@xXU)Ҙ>?!^DA3wӪ+(oi?}wY_Dgxcv!Գ#f](`+6Dћm(j'Ioyo%9<#FGjo];|o~s˫Z{$;ŖTMI9xX|KO6$gȶ$Dw 'Wr!+ݛ [T{ Ki6& M>vVU ΣWB2M>ֈd$K65F1e.OX.6Poe΄ q[RkPܧZ=ubM7/1%֡-dlJN>P"EՊO@։*o_Rmi |)k SsSl/pR3\t1SJ-pϭK_b@gF3Nz(s 5&ԍ/MS+ptdR5]TK&r㨯=BW"")_79ȶ1ڧ⌱YF#&'M!:Ζ2@K"H ^|n$<4q&(s A!bl1?BWjxve=/\ϋl%Wj(Te:Dy%S!J)jgq s%=ƴ;7JګeDѵ\s8 ΙffI# 7%[3 GHZ>OdÄ)d-b'=e),}:~A?C 2UMH-)RA5 ) 3_"$U*,c.$.0-F[Ţ?h^kBq[Qdb7'-Ukss fQdbn[,T^k;9 !AnO:FVq;031^+UJܧX₱`SspQo0G~*>Nxiow%ya ptj,iUAY%SJ~+N_*#{֭1*Sl-(]L(y; .9cE֊$X4yA' Aujj .M3IM08yx>3U)u6UHB}$,M]ezVmnu_w`Uz|~ $X,$ BOB{ s0d0,]ͱM=Vf 6j iHy$bxHhbBEa "Ę#e lH0:ҥbFH^$3˰bwV< ;ad60җ":Yة׏Qߙ ~L'¶#l^ Ab; ]K0__nˋ ~'N0Nnr`)ʔv^2,8 c0]Z%ea/ @npMyr S @pSSR"haN\BV],9Q"ۚ8gd]|ƨwl <(B +$5TnnwXR&)3 KT( Мbt=-zS>7zvmkmȲ.&-! ]$3c|IbS@RA͗m$bGA.z:NsP1,PD|y2 D v@Db"*Ü |x΢bXN$JԚ5'࿠gHZP5M`F%j,޼jmJUΪrKo!n +I./5ïI'9*0 C:qh:z=­&:͆M,<] '\I:P` Ե" tBÁ$hd*B5 l=@ځaZ\<5!  J;@q}(%?^} _;Á :% /a,TRc<@ᡇlvs A0ʥP4tw` ( wBRVCwQallV >MW ㊀Ap E+fk ; CA'yM3T+1 S! %1<*!`'UzׁT=$X` (\͏!3JE ѳd4)v+bz |,J>9. ˃ &}*H#4Քqe o/Y#9󇝃r5W)P.5ʾwP4 k 6P v `93#@Jf-th55@\ v40#1>Dph0~ %\#ܞu zxv'нo;`ܔ@/ d'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@/ zZoɏcjnח NY~&#L 02'=ZQ̓q%>zJ-+k ^ 4>ڠ%Ԝ4S-^^W rbr鸤Ov b'k YFslAi%TH`k#o/p-Zڌ:vy+p*E/QJʸeB7!Z叝DW/gBv7tp7&lȏ~Q <|tТOgJ"`zCWf%/I]ezCWW=+DH]D*'BV Pjf^ ]93=+lOdGOWR{V8e;tu?72CWCiL]{Е zhs=+ ]!/(Bϵ~( 'ztڈVg9Ivf+?߂YMhf45/4h=?viHڸei]PdT6hˏR5)7ysm5Kx".jT}/-iz@{\ljQx|SJ7QGTg(mqe4uU%l5O1B(rVMrN]<^p/*e+w7RQr?{3)|/vRQz_TT\)zDWتu+c+D)^"]in=+<[/th=vB]@2){DWغuY7NJ.^"]uUvw8.J\ %|>O_ 3jggrkf֛_-h.4JR#2"km,Hbc7 ajW$vw ?ڜKl8yuQ*$,$0^1Fz1^VMJ{# FiP_1E)ի4voժ:Y,l;jZ< J꫓:fYb:6|{Lgzca <=-?_NޕCWQ;lvssmw^{8{qrwٶ;ͺa W+OrҞ\Y~?ϲAGT-Tiit%E-Vt'ٷ)SZ Cqy5ĂGz4j ' ? ?~^w8y5{Uuy8(>|ߍ!0^釔X[\Ge]]OAumhQi+6Mlv Z%fr~v2P (J蠘L̸*Y eM+KUwFn6 |ݱ^AŕW0l^6x>h lJ ?~vDK3\O% m&OͶvJSqH r*&^?;$_~H|W_WbZ$Y\)ZV$/B ^H*^| gyH.G"^L.F=X_a&&~,K|L( Z#N=IW5{ՁWH)SJQYVR/U 565𨼫ͭ;;q|pjFǪ-Q(Z%gTmɫTaIASRO/gcӒ&Xߟud6)7dF=Վwi_]~Z*辝}辿^Af?FAP;F^ ?BGq)祀N>tcZ?zVpMgcVKMs:Mxy=&vl9մ{OQg7ck]=4x?_ h烟6כl2KrO->7KWmmCLMbޕ$5އ7qYyg ZVUfS0dφm~|;ݹn:y [`G? 
RUIohƮlx'Ի}nfZ^Gvڇ뽍}LOdY/5=<]޳R.:KQqs 1z Ee9{[9hf\^Q/ ΞS] \eۂ/4>HnG%?S8)4j#9n眵Pgӳ4-G}܀:(#hT4!Uㄕ<6N)DTt2!@q&G=[ٟ?BYy=XKwQ`| l 9L]u:s_|Wpo<0Rlv!.$kut cDтEqLNjzY3y7n6u˜"H˒QNf ;afvې%ݥi$1Oo& @R-<&%ޞnOov3Kwlۓ^3IoEno;&*`;iL;¿bTyk򈡁x?>tONs<#3ۓj7l'xYU~Hya:Ho)uio8Y{~fw4+iZ泳lL%x[b6U ͻwu؉0X^nl3 @X)ωxS+eߧrBjv}B}pbJp[pޡFŒPktF_N7hqHyy@ƼMN9'˺ĴD)\,!s7ʌSB(1JUR%w1Pe^*a*ވD N23*?=17GZ?tnlmJY{fcK◿F* ]Pud }YXdUʂ.JJLQ0xATSpXx. WK :UM,z(8w[u֟lcfVX}E]&n)qM=DRzʚrЄQRE޳ffhdd*5K]2t ( 4Ps&mSP>j7\=&mbQ@I%AƂEeV#kGX՗e4z3==S)TjTCh} %QKKjD!gKRjZѵdw@pӍc?iWXn9Z|=hyt':j Z2:om`(сEMJe6&ns 8QLL$d- ]E&Ec4T;zyˢqm'r1Acc mj\8U]mBh$@I` Fay?}Ԭ·KaV$\tƎf\qO.Ial k VuS B56#Lٌg\. ixs*w 2|~~ xQTJ9ٔ<]k?HSt&P䜘M ,\︥zG tq6mwJŢF"%hȝ\r+s^msKFkGߩ9oyӼg{\* hХmֹ|6>OFXhWTGnS>̴uVϒЏw1J{76JW_^ִ*F~'^v],]a566b}mɱ :<2G>73v|;rsq ~`척y! 0[kѰSB8gًr BiVtV9mX/`G`iehY,p-R2IͶyN_N9U_6VqGo5"_?3_A.A9~0 1=89ؑnգ|[&!H^9ECd1$(O͉:`*DNSrL~~$˺^..5$$ӏ=}~cIϋ|eFJgJ2 b'Nκ",)CI89"?A6crmW+*KK)@!X}ŧ.Bݦ4P)JsQs7wnƨ3~.W{Z@zKwRP39:ӦXM@>LEQLu~E%l ;wbgKF9.:ȱDD/D_ MrB^-yfCX"t Pݳθz[|so-fm/u1S̗)뀝Mr7r-7R)2듏#KJƝ9[Ȟ3kl\D&0>L\H&BUb,l3r;9>i|[w9}f4#k`6XR#SoH cB-(&Ւ` ȅmΙ!ϤLXK*. kLdhay䎵 P$qD~QtJY4xީ?,uO\wE3~rprN:S>F[]*Ezĕڢ "Yk79z=*ya-Zȃˈ:_u}c*Vqcھ\LQ\Cqq]6K24JJ!ۨ!YAЇtN {qmP]*Brz?bvk( ]hL[̱p!ueإ(lz`1Y8˲\ScIهJԞNP~vW=u\EJmZ.{N[oyZMv/'g*v/&]=R>_E?uV:}Jq˺^2R dѧ7M/ovi!n.Mu_ :?/q=?ޜ7ɫ|ŗ73>E.>? $#wF Vyef3EoQ”~8!˳QX|y;w;}Ss]5ك{l. bori>nBۋ\S73ܧdzw}=y^e_V?5پmT4cz[ШE-z>kuy \Xz{}SF+|}ߌY{{{s{ٻX[߱6Yk?ǫ^֋ɛ*y[ŴD{g IZ) b$׫%2Mo5$XqbL5PfQ%u"K@&ZĬVܬ+Bl `@9mi9!T神D$E%vN6I64eOlؖk_?r'@ӍOs˔?izR] ,X(T'h&(Ns|(f,xNeKTĄ'!$BM@51Qw \:q4ov'K{`|0k [彘 J]>Aj-MP6,Ȗ{wS}XFJ9y2Y|T+URG.r# Æ2u$HOOZ4Ws mݯ(AK)β g9۸a^iv;H:"0+椓{n%+`m\X:uQICi-pr|E$ d?I:[qg %$Bjar6YS3' kI:88+b꩷[WLuuڪڪ,zCƢX2ի̴^iE(Y]Y43:4 )6sX'G%M`$Vo[V+s<))1_Ϸb敷%6kg !K]Β/ћXb7JOQ)s 9 0ρI@Ur5&u ]' CY j̎ 瑕Lˤ!_֓.Q4|#Eb4tЖ(V1BiRYGdέ et*!܏·+9z2Z)ocϯpx6͋ӓ|z`'o8RCד\Ư:DN~\:A1nߡo*1}Ц; ' }iP\r8rf@^(yʨ~*J|8X:s4w!kG5mv=x5œ"0u0:׫dry_N~0#p Q?_eH[ꤗy]38N@閺GY엧ۓu@śV7Z>>:[}ȿ4w>i#㘹*wB\o_1XYa~bᔄ_S_oehp!iJqY40%G<\>g7+xq6!&&7qm{Sı֥_dRD!PRUF[Ts0|9=\M_&z8!iB("_FґȿyXH"#$’K FEsBV:_/+tu)k&0}(ԡM-r7(uHM g2ǿc,BcN.#Qc^.|ul6nj=\.'4ۆnX$;s4頦f^T p1l2LKdsyhn77.`iwdxB[ S__xndWb[Z (Xah91RLN@H8$ Ĭ3J )e(a+J$*8zY}:+;U251srXY#yj[䅝ǪgԔ *7I}D?[rNkmrXjZ)!|){BD'Ts )r81AR>DD4hOwN=vb>;~phSb$`R:0̛HyPN#e&! #1w" []:v`)U,(QKVBP F &@K. ض\ZkTnX`2@N{uN]R۩-H+'^˝ـLǓG V>HLO?+&ϟd_a*;^ yx_q8gWؿ/KZa;?%7}:3dp,ߥpPͳj^ga?RcOI䤱@ dkDdt*tswЦ骭mE=M]\+l-kPU~^m͙h~@ij2;/^,EjR  a-{$Dzfzjv>T^Lab(JQx 80([q1sq5yןQÇCݖf>A'E8翽V4YND?{6So1xAa%ޔBiRG4HAV㡷YCoǵGoYJ)zz[.R#+ \eqV*Ki*KiV#+!$aîh6u<J 1 ,9⚣aW(-"A*KhW? \)I"GW(0#h*UwlrpˎYvVŕ6!%LY-d%MRq>&]_9`ŻR;J!//y0C<:=yWC5U&d#:/A36R-?|~y1x.grPJI[_Jy6K-Ds 2 J pYhtJHadԜ>h/lxRq <`y%|bFN1PMkJnD J|xc(D`,POYbE5*X3h٢-@R>-{Lj 5 s>;N-a6}"ط6qm>6#ˍcG=Ux|%‹r6AlQԕEV NBi7!}LI>$.9;F|2ŧjT]: k]p9>Ta 0,xb7i6Њ-T+"|>.9PTpKF8*f./@UEk 𷷃 }=6>ƟFkre*pKV!Q!`ü퓤) -g[ X(RƁz6petNݻq(D.{hRnApVIYT_?ZPZCx W LųLbzH|"n-G_x`oںMł1>c{*Us8dӇ:~yr&A) 285.DHR.KM, O,{+Uwww( +tjiF1Ġ+|&X^pDNL+ŜJkC<@&D9ϥaRk 1RHBc q˨Bdj\z%mOfXi˹j#5,9D=_'(ZcNOovXUSd$1w#'^3,+W+\tƻKQ$9͘I?. m([YIzS$+.x%]^Ms(e{W3"ZQ4sjNCdޢT㌈!*. 
B3=t>V 7(IsDl2ɺ2YR TRL&j\9* q؆$:*Kc)$BH^EsQ^ۿҴDB2.!UNPŒ$Q(gUtZ< yV'Ri_^ܩe \ȹPQ4 <%lPJ+ch4}y[G} m.vk}nH ʟviVnŵ9j\tId+.IzPDp%6 g_M"1(EB*wU=|0ɖ.8(&Ot~,+Kը=\Ug>MJIC,~C}~c{4iW65I +W~H*IQ#VrfͪWށDY6_cmalN֗SÏU H0şRco.ύ\?po,gCcO .WqVCUDzEj8[FW{γ\t=7՞v4inPP)% (u77CϧNDtJ@u~X ^f @O圢5ٷsoZgȧ7gs':Gи skb-F ;i)Zֆm{%MR8Jb*ٙ=0!tee9خRlMc&ƭVmVi9d}@ :2< ?omTDlDu8D5 \CsK S0J{(#T|O7C48pA&uDT2Q)3Dr UTUQhH$Ҙa6j͙Brc.7>ul=jm:[P&?Amk\x-[rP!p=v66aWlP=hqGxIP+by҂WhcvAoNQ4ȆҋMn9~AK:Э]RoNy"*YhgR> KȳK/$ */2jci5걵hZMN3mV5T[-NYڎƝ!N)!~Jޚ~HKIns5$I[-*!d`$POq& bgOm֦s;0ZBtYH7>x9AMo/巹J9>{vY6 ;4ǠJۯO^ ˏ6-@3N3J+[mL^$-|uvTJocu2 ﶔڣRzj/JEΩ&0xȳL3tJq AL$]Q(OP`+(%t$  eM6Iz]6⻉d}]n|t &\@= 13r?-@cL%`_Zlh N;)9PgX.u2D1jrj8"j T/ A9lxO%Mi_o>߭ v7ޟ焰Gu6^ok>۫N&^3*B-xjͭ TqQd8\|=j:iy f':xA}utH. N5Ozf+Iq>9a`2Ltnj |X/7|n>U\NɇUVlqs7ۆݲq!sB[%u4ZĽTIDs;/Rt3Ob'z6N|kJ-9r@S&@p* #Qa;ӊp݅~ӕgֺZGO,9=#v% A10 yAPPCL#A_Q$L{'mobehg? 6{5sa^{L%Ԇ 4 QN=Lu!N*փ&4("5A@c, N%e.r 1UrR;&c=F7/#w 8K1Tu%-^ 9 i)RhKJ*DB| h ] y/oGzoϟQ~=i察?A&5p: ~Wvp~T?"_*%rh4%N:BT01tut8y>pFNI,z7F7;|sP]B^(9dTCOz5jy qI~k?Liӫޛr̿IorS:W-/cwUo='!\ LUE">p޿j[\> ׋[Xُ__uǛMoR x̧n!Ҍ|\Fy_TU9.f\}8FoiyYp^K}sWs$l4RRj4KC…0")eѠ(' x}_Ws^{.:m"$w_PiYpuIjY~qz \ŏU] bu F~D_V*TGM)Oe@߰@M{^?ΰM6=MeRT]dunh7V]ܛv5,(ymX+:k̶֘D3uts;Y[RH)cN[ȉЌyu,AJD8$4qΪҊq|g^@j(@wq#ErK8SipBQE'J%AEzC:-~|ke]Ű6TjgysTիʞE??Q37šNJ~n̊PjZ)!B)n!"\M9XdHPm0N.N"Y}}+ՊTVsi?B$yBH Wʫ90 Xj@.Bx7{Ϲx+xY}:Ub>JR` DQ.7He"DiF)L\.4)m֨vu(T 0wdpou\ v=f]Hw˪+03yzۆ$YRpJ)T͚.3]qߙU{VU=;lJ 69Ŕ)$"I'C$T y, :z&C\9 :: @Vj)`JԞV'1l\WmwߒʖA9́RJF&]^w!5'ŋ Il*2?xQEmM^?ۋ/z > Wf;'s6]q}_AUTvX͒-_"gG)<Utߗ+kً^mI|qB/M'@UЦ1os^Q8J6"|+ԗ_|f^sqz> 0{QŨ^Zp՛E}x1†\g<@j,p7F/Q.9Nj/ۿzE(';O;aV啨/*Fz!RҖ:mC&F:ZKo$A0tw)wBwY{WHXh^0{@Viu6ڰ:$~tIyD^; Nc<%)ˢv,"iTRiD/"#^HP@)5khtJT@A!T +y [c{tzayд9+.f!t)^1~&+U?}M_ei!IV;++b۰>eG9f{v= ~?}U[7kX:K^qqgezW*)^~A0FmKa4=̆um̅}t,)_C7ٽ_:w@kq',еogAͼLbRtBm8r<(HUfbg3ePqә6ۃbll8GNҳO̗xV0ݩ~Y=~[1?{Hff+ҭ*Mց^Y[^ !w6#5cݚ2_4.a=6R3yVF#!b/.=4b;ɡƾPJ[[H;E4g3yv-7^[ST=hp14F8%3 j$F.M\PћHn<&O\ADꉀ㹧9j}É2@e !󷱲Jfrid&㲋3NY'w04ZE! 
෷c2T RUҠ"$$eDBq)\eitsTP՟\`Xxg\zHfʋ7O+oFNb_O?&ҀO8h^8`pᓇ(lEA7Ni{F/sU=ja c LB G`s&f3gBt2#ߍcsP`&,Gي5?/bd,*,[Auѯe:%fŹd9$4F h̊hX HX%;dǕ츒WJv\)\+q%;+W"Jv\Ɏ+q%;kWIG#ien=VZC+{he teVZC+{he졕=VZC+{he5H:P 9 Z|*,Q^R"ZBfc*q=8츒WJv\Ɏ+q%;dǕ츒@ɞ30sh;)ymӪ$~+P*e7'@?%gÅ.qgա9,h)D<.ŸZNѾB-&]3AX' XX[MY-'k)uT]ø+:==w< #j4Ze]NUjNwnS4帒DIK[w%+ui(J!$H.Z)CZC,N AR-lB#U2ֵbX-,BUj j Қ_,31og4v0= ؄Z4h$ q LL.Q%j4N0-Q.X6'ǐ=Ρ&Kpb&Hmc"^8- Êy)Vvjuaծv'NK tl@0"87:٠dPS⼶ʜbQIˤӄ@Q{T;-(ȐzўG&D%L{m`]H"NuTp1qΩq~0^ -ZDS"jEܥ!&p aBFG DpB*O6_bPk%X&V"M 785I*ʃqD%U(#炥D^'I<%Ɖ8xM|',3|o@7jo:E --Xi#1HJRs|I7'L ̶2up)f[z2&}"/_kO vu3c^1^e'sU,D\}yh,ǭN  m`[eE>^W~}(H?os }~mTUQ5bGriC2\v3;~/;%8iJF &%0DŽ)&C{աlI>:_g,C"muaɅR0svKxwTDѣ"mZD)kԧ䙜wU }^E\iyDLK{ĕ{5D́D\6Ǯ֔z.hCL$]T'LfE2 Y]RX8v%AM_1S.ap ɕGfPaNkDL{7NJ  &:: }{7G*H]sfޝ] <4xP$ZQ$Y(ך!|2 <ӥ8 ]pq;<GH վ·uгsYԲ4+ڹVHNJ xmK40 odô XgkHAM|oeoeoe221QWњ(΄V杍8U Azn29Ce$2AB,<#:js 2%ᛕ(%(Ӧ(ӉA[sM9>q⎰UOɽS`MB>]X.Gr.w(%#vvJRUS vV}4@< 8ԨMQo$TE].BF?]lQUD08M%ɑ6Fw/~;%9_ٶn6p_K-H8j=!٘; X^&YsUDN;v95lYFmUtWto*{sS6W|^?V*yimIE=l6{\ޗA"i$O~U%MYd HDI>] C5{kBÝ Nj1N?o(aMGPjⶅY ԌlÁD.tXpa Rh/_>sT3eݵ :Y]+>*nn[yUބq-@5 sF6T p)9TCIļ$*m2K[AK(Gm8SH{"dDeL-&@ YI0D^O672[e gA7< 8iggJX\vRbhN RDRC4|i&"|{e9ς7IjTpIC6@@U.infYLG` C՘]BSQ,7(M4sL&AGjx4gүutXrJ MVPT6eXtVfa}ǕeJc0ed?X61Xf)`f s}^CRܓr*hY8Nss+Jw,mFàXEa: _cҶd:fg-MQiB UtZ|517ǽfRp}!Uu1X Lo11*HLzUتV_PGYB?%Mi4+rx|e+e)8va2H3W(J_+[}>-8{#trAVRe /J-՞A\)qլ"sb:eiDDf +$gJ:2g BAkplҲX5O3>]G qT9qYy5~op ^CAXAHƓe<ĈxR*N+-,x0,DmfVM>rx׋<ғKN^1ono{fK0R4 ^V9_ѩMmLˤcL6Qh6$Vq,̎'UO*1Ǽ?WT\.&F)C$<QÃD$RN :qAeҹIY?&@  Y̩1̣\0)93<61T۰vGz2ͼ _/ʏRTe4Uy9>F'~ t:dY~O~${wY◟[v%/g/'3*d+EBkU{bXpx>jzF#,WbCO [ `;CFrMpy6gz*&:4DZ#ic}3sl\ 9hnnZPTqw E,9QSQaT`6s/̂5q) ܗc'fg2;{깓zzјJ-Ը30Q 9PIp* ȉaQϓ+홧Au݅5ƎfXmjqe!HB4L9:/Jk`N)FF TB0ܨoN r^U-?PPzZJQӖ݉ _dCR$*tpOt4$ III4("k Q3~d 4{Mm7zˆ޽\A^e[Wà?y 3*a۷ s:x1?Ue8hK `pB+_\aSH-F!a|2ZNi osx a534Nd4>^n>}]"?M;>ҿ=S4Jt2~qߐ/n79eDWꐷGCJ@BM/i!6!kSFI;>9 OrO7>N+dua$\ Nnt5̏@?ň᱋Rנ.lGw_NFf`ϗjay ~{t.>4ZlHWo o^~5Pҧj%Ry: |}@ܕ-.\}cxߎYt7+$(:(/}]p!@4TZ[a5G<|.]/uUBKķ(Svg^&- \޽^6k}tovod?qOߎ8g2*^!/#o+`%2XaK HCȡȝoz묷%=cK <(&97*MХ_Mŏwy:1Noi~Ls;@ǛFYEe?V*SoV0ʜރ2/k~:x2R n`OMw4PUaA[=?*󅘘y4G-#֖fNj;hbylL}+:sM_ZRHi2 㝶`: XN+ILF8c1YF&-?PCKa ,;|"rf9eGzéN8y`'J%Ay4Cz(!JӘl<1rd}[Y#36ZQS#j^3ʈ7[v?WӊtƩРkdǟt;![ |7㔲MfCbڼ#%yRm(ϓ4C8I0))gtΰ`RJj)uGU<1Ldim;75j?Ia\oՏU}=V/+Jr`IfO\:7`He:$H4J0Xr 7`ri]h]UvJM7:t7?Z;&l۩-%US.m#6Dž f 0ET U2 {忯Wȫ=.ߟ e2x`k6#8;g5D`2YK(EL-|qV~Y𯾕paWv(: ~N>m#*zVbcȫf_z˼A,aҷpnqU9I>X<|X]9/Xv`Z^{Nf|wo٬dlirx0i\Lfއ9lP)3]/5Dt0+YW vBVtt4Ct&Tw]+DkD QZҕV1!BZu%]+@ LvB]+c]+lEg »BWvӪ Q2ҕ58w!B; U+th?w(k-=]u#^پv`!2\KW]tʚ̶t%{zjSc9]!`):CWWui=]!ʶ t2tePs+5g!.ՇUJ놃5?lQÓʏCɇ~}?MNE#:Gi(@26S\oF9fɨ()}0w'_Wa8#m9LΩ/g#eyt@?kOx2? *WaѲu*FeX; a_Aʈ^8 IBt GvH{'MVpL!&&/$N;4Y)59ݹ[,JhI9pV\c`Bgڒ؈WꮨV󶫷6٫ނY]!`:CWv[$Y%5=] ] A]`ICWum+D)uOW;HWx 3B3 7:v4<1+,eg *B ͷt;teK f\w$]+DijOWCWhn]`!+yW vB#^j=f|փ[j-|[6C[\AWv=s :CW=iW *nNWmUOW/CWV[\iY؝ _oF{[N2U|!JDq:C͍z Bz:tm+n" I,G0'eɽ4䅄|>e ,O*B -gu,յ=>CzoNrF:Z$*r!ނ4K`^'FuɢΨȈȈR^EAS#]!`:CWWun$4+(]D;rZ HDeOW;HWF]!`e:CWtfZv;ݡ+Ŕ.;Ѯj1(ut9x F +B3sֶ~P;"]ά+91p]+@+\x-랮v,WLv殛+GDI Q^qJ?),nl趍n[Z SMEWkt{zjS!g+L) ]!\BJe=]"]YakK׭IE􀈡C`X;wR=޸V$hY\;A~2R=ڝeTϷә-絕yeR]0_&. \{'U'?L &Ѡ'IǨ5 Tp'btH)GFtF)GVwE)+划^)A S׆OWvfnk3z(9j lU K;CWuFB=] ]Ia% N:CWWtFBWhvwkBvzD+E?c49ngz'S7X9jt燗K!qqYpGKqGS1Peұ`&BP (t)zcpyl2!;=C΍ej(/d:/oj$V7|VfEmH)}=U$V\wU[S"yR<9$% -ašHݍFwot}S\uP735Pc0rIA(L P Wa>JGJ GSE#)䌬L! 
< mu{<2l8:77պu!_) OoY7%哰nmy:mĺ=25b0fnuBCrTu? ƄWEiTTorf4\Gajԍ]^}Kw~YtΓBgg?<"I *~n*;M5)ŝf2g>:f\ʧNl%ݓHE`(JOx!iZrDޕ57r鿂v4f)$kFCj8Yf5q59FuՑppqՑ 5e&FvDR"5i>):Y!#c٫]^+.tux'44m )R>b E9 T_. 6.P gZ)K4p1е z 2\Vw|VA\ OmdghCօ}JA--2Ek O\经e!' 9yV6v:Wnyt_wZWf=˿r X>Ų&Q}bDh; <;qw0Eʼnqvi^}w R#oC擽S)B,RETMV)^rGD*Nr/+)z'R :\ݚ?;h%tZ=DkgQTkpmخ:KVNqp^AÚ#]u ot JRM_+)bMtH_if7I@d;e/H17O |(3ʝupL #8j=eO(l'c%Qշ=XjMܑ.5E*֥XD\S rU/1'$K|cnc!zH!^TrXeܾSΖݾ4ܾ ݾ1}oMs.`&ߧ(.~,>~vXvSGUrWJ}3otP~W2ߦ柝wc1L*c ,*rPl p"1hg9;si4˘ߦ3ٚw oR_N:>W-y X.Xm%o9a(/ {GL%lײ(`+™eva עe'$x@MdU*e :_)S$aaHUJߎE+>9y#Ųuw BQTaLS)R;ΙSqjYԅ(+޸zASbW_'!7A!Q&CE b2 (RA*j"QqǴj6Cz>eqO!>1]%;S?Gxf7fՅq6]D51YlB=/@Q` sA!K'?# #X3IЯ,@ P1sg"C-"&Rpå$*1O8 E$Dh%LW[" ƘLOCQj"jd a-E}yG'sTu;$uh/U0+["VsmY p'l=P8;).FK=0J)Ħ]Z_-?_[ppݬL EbkU+u;R`:K,wjt}-oۺ҂eRC7K_1]x4]iyoY%6rL&~?Vga/Yt,GyEy&fIM JF"iUOoR*քcQPcMW)<:OೣTn*ƱHZz_6E9Ŭȣծ\̗`?-"Ys8=Dpfʪ;EkzITfV-#+ cx%,#|2TLl"V2E|A⤩v=̌,%D"S)y.3[x_l485A+nG0N%@G|ηeo]ݥ:L]m 0L3ɎܿJ0cM&tak: v0԰:vŖ\ЎCK 5Rsfn9FD:[3*9W/|7-uoORu>I LԯY kuBRy'KaT}+R믓Op1K-M\BWJȆJ$e1"i ƕ* s@J,gV\f-15V) P#~f,&ϼ'5_gs;JIc3ɂ=L($z`^e IX=q?WkÌj%A9l]]P(3q. VMJIœFs}0i쎾^2C#E ]Q9SKgv賓4_%U2_%U`h=Zĕ8 &U 2%vҨ#yP*EJ\r~}He"d+꣦/- 1ʽEuX/Vv"0*Qc$k65/gIi 4JU BN#,$z1l1Ǭ#x`uk· mu_M:43%cLٕ[D9k k )}s ?)R1ZcZ3-u6bT`X|j +)(Ԑ-ҝzt>-饰lJ͋..聖Jߓ!1{ Ql;>}>  "E(.`ub1~|L`WP/ݾ3F11hbeIQfL~M0%Ҫ VL9n-tHTκ=ᚻ.~s'&Z 9ȫ*0- EhkJ~nPފj O>ڙs1{ڡG7ng`HDLS[KXT)wCR٭sܿ[7<|o]]\ԑ[pEV;mJg3t_9\+E2zC8|͎w}ڬ風k칠07>wFl<\ j=V'r*:;cJ39?;/Ƙj &y,cX!gPdmʿ2 捻@pWaXt*xDvvF4 -̬ Z  VoH\cʕ^uh{7-]:j]ʟY\lilx/9r vq^/9"ΐhn֐L+vwΑi>'K~{Z)R~$b3v㛘t:\m/pEnbPP6Va6è]@[nAEyXK X%:|0aT k )UfWaCk̴V脌ߐy aI|0#`K- m8vxEn.)mu_i-;C T*c->HRgsȼR.!´J/=%m)_"$S2Ǜoofi轙֒A}h]m|$fHaGL~|qn_6s4ie,&YN-0D%8ٖE.[b~yRv1UD xTL ۜ*ζ9[Ͷ;aZ{[7֌FlJ19^z )ߦJe&fUg:05`ڎJw]:oQxIN-VL*\Sc:hTG%_NBeG-~PFT>pĔFX-Q 38*%N$[)[Q(NbFl'2*E_Hi|U4 gh$ e$^%Wī*U, S1AYɄA ۟ZP ":" 9[cD6Ȗ懋_!8TONcP=0^}:h56W͜ܣ%$$+S ;^IpczgAhZArT߸>Cu̧F?Z1HrNZJ^{(VWD㍑6Al2:UA)R٭3 R3[0vj@3laŎDncœcKbm&Y^=6`R&#f B"amIGeCAVgR6. 3 grfRgqIT&%bD >^z?QPR i\2!:m+D<`FP TGԔ] 3KK&ŋ&y_'kތ݋^f=؟ %}N{t gW?MbD?'N{. oi]]\`ug۠MKR{ɗprfqTNGKg$%i&Ơ'7hbI3Dz=|×:i51xBs*Qq-<S,859. " HMd92f<5.&y9/Jeΰ9N: $+DAX83;.nڃR4ԁ15ƍ}{K65i#z*Rv͗[Oٟ|}ك`dCMJ.7BHjͶ|hkg)ZOR:8#"cvʔJ!ACHJ&irJpI!mF#vJڱ[$߼IAxUkl6G3 >gVQ`͝ѫ.Y{i-&ɹI'-rR7Zj)f';t}㖁;R"dhE6{ɆP 'p:pjEOt'rnEA^(oBMKHƐJ-ig]2'+=v^H/k| 1m߳L~v̟12Ai:Z}?}[t5\:;ןV_kg*XuwUy%8y k0R&EBf48fO%Tp og!rn{6^UEXT+L$:ѵD2-{99!Nqvn^2E (Ѣ2֒%Z\A `ݏvWn7o^H@F4rdy^svڌcٝ:u[_VZxBސ[R,$Ax_2ߞ"O7j̱5xnIrݢ倹Noɟu:v1=|-dgsڪo,Jʋ*Z,f.b~Ugۿɵ7'믧:!/߉hq5{he?I~0|R6#󼘛SK~?-Շ &wR6R 3p0zHɂ$UV6c ɔ!wVSu9\X6uRQ꼬9,;ƗSO"΅.bQ_Uquc [<쯃1zo^Vš~jt3OeM;]rKٻT"oJ1fQ_`SU׬k=vX/">UtN9 \[zj|9ѳ`}vGL:m_ 2 GP ߛϓZr\4T ngZZuIPb*q@ю$ ~Ծ$+#wr~~׿c4uZLrZlt?[K$TyI]7UMKN_}Wgp곊BTLw&6q{C{ozL6fa=tg^Ʃbc:yYM黆A&_멮;V*I2'_˗>w=klus{u^utkC7j*\|tG~=pssL|El5I3~r/Os9Oq*OU\&e.﯇ܾq/;֓e(_kmD2s&,9Q݄F*YX6·>yI"0fɜC4{˂U6Նڎ[THK;MAN1t*'"G-1)1 NJRx ƚd2BÏzbkN5#J 9]}NˁTcVMcQ;/Ug p1W"P7ND ]IiWwAb?JOm'ڈGs~-0=N'h#p@;9 хZm*'jsxysnr^`=EmjRNټ,cPYI/kr4hSU^bG2y9̙K{,ٹ̹M~cIcϩy$]ֹ:N0v*$9Cɨ;ON -CyZt"є$ۣ@,R"KNP6흍\TPtVXMqi!0':!cdDh4C)R 2fdPFu\Ra~(c[m[u58uF2jJjP9-:!a{'s0>G^uZ֫D,XHQGA@Uu64uVudUILJDuLYL  G@5DdGzAATm0kf; Z1j:yF=BQr- |?}3CC~ U;c '^G;yQw6ч]1Ǜv>yEM;#QOlUS.[@X$U N|4|uma$yěJu0}f/Y1RQM.~BDb@O^p(-㞙ܞ[ځ8(XkHsliz }/RY`5i}6SP?]رTh8ޝs6LwM2רL4HR 9MJ%:.˫D#zt% SۼD8I7V]4` s fGP/|ԙdt eSҏ)8 Gmvs:"pD#O:'O{Lc-@Y&C =%A 4 Fs2Z<7%2ri7A4Š xtw= x=j]9r6ӚD_671L/#:,H5V[@DmUKB5s/5j,“nRwPBK1&J%ze!+9T'˕=Do]L-1@:2;?)Min{ZVbUrUr Ě,,jUkܐ5x akk~o>L^]>ɇ!/u@%XDb9XO)) I@1Duz>Jzc)fuMrgY $9%u;FDي3:+(+UAB7Pe xS8{vL[bIdeL Mx+m\%#F7u]Pci#Tk:`ڋjt&ʢW4R2Qrf.x*Ř:QCk2ZWI%a@FID=AK* RZ 'kTc84,V%_b1>۔UA4v{?uѸܝXZs?;:^lrlسSn獍3\Z /~8 rUb傣P,I;>r1vhkgO9b:!1cOE\!2N?x? 
~ b8X.Y >'JaA˶o%qk sIֆ@ 0,߉FS`Z/Oǜ3 p=0tcn@}kFfxu^x%Ue\ Yxu;6i.;H1-QֻAV} Q}۠ǻS$/'6>۠ Lk3Wݣˊ?7wcO;Nwϡ?+c˳+ӫ;r2e\Ѻ~>}~^Gja w{b+F ^=ʯdd(Ƈ!9=? ڪ8>mL뙎<1odF5:ye)λ1کcLa x"E"E,+{UL )1pCE 6.{ 0|] Zy+e$ZT~~%N#xQ߿wc/?~Oj1ri;oyw\Z}v3~vlt}ۇnw oӾⴜZ͖i;7Uoy}"9O ;{\6he [mԦMKev.6ݶ%G<;%C mdXw58B6˭.sj n zLR7i G!%=nmV4(4"rTTJ*zFh;afn&Ⱦ\QRnH7B8vG43Wc0ڌXv#E)j2Ɗna0.nl`U`u,TRkϧE+Me1l.Oq@ /U1 eg$Q91뿨+nwo?Zu@'E <(c{3 4G+pIt3 ֭KsC(=K?B v jx4,\ɧ)؉pg!gʄ'VY#y#7彶+*;wNlʉ&Zl*S.[=ĮJg my9;ʆK,i8jbg1#2i+oc V#3Xy: [4}B-Km210K(d1p)l:Ub#@<:T*);ti1T~=f)RU3?U%CʜdG7D X1[4T%3d$f!J'}]p%,B1&T2!G[kEJSa$'_/G9ٸPeq쿎1ud8jTdFF ?OGK~!P6K]l:)|Aﱳe ң:5ii ^74 Z}^r7ƃH_;H`,c@LjL7T,YB0*QϯߍwӁ 6:`l.뙘6ْ/-d"CA ,?J8J$ kYXk7dMk=p;V:BXUц-EXJ5[aXDͳl=? Dٓ~^;dA;v+bS++%[&X%f$7+$'JXh=VKu{Wxט~ܤ>wvŜ;g˳,Xv}v$k:e8CUϓB諾.s5$+Hݽ~!2 g@x/_*/bE7d"Gt oYuƃve`yt6[tj/37"^tlі8|#_~󫻸y2DGϵD!H>EkcgG"wGG+XhYp=5 1~VdVݵZ[%XDM Tvfy~ iy(fJ@")l`#QP+u1=n^~lB7N`*א4SްH FTB݃yo'FT^pش )%HԊLd(T ~n°(Yki~}K=)5Rl&LJLJOVHf#Ydm6uC?/UC?}s%%tH"olшL Ԕb}d)O)v#D#yd-fQg 7-vVq{%QbJߠ7+*6( "iC-׻N'lqd(YUd e8C/?G#G5F`7H6xU$7A)eF{)ѩ$U> 9q A;ШԱo)R%6zn ;Z ђz)V@^oۘB  Ar <=F|T`8'2"C7$ԇ+Ews?mUJ ИFvreBؓdIL[ @< $'҇BDjyQ-xŋYˋYˋYˋYYLrKk,V\bN,pLkmQ 1#'鼹zg[dcQ܃mL֐Fh̟\>/i.n̸ so;⮂Ҭ J6(ڠ4,0E轚]\g7땃Y5Ѱ#p';;uSk=F4糰^ mKzy:F6Kq~qE&$6xv6"6DHH!C5W1jx#pE)*յ)9x ۤ&)!T )qEdYѽG4 gl{VGhԯ Yc!a`j3=SU]&$OMgHV M -$TGiDsQ0aY&MIp1d`6ta~F`mTdž 5KulxP /uo\͔r7S.)_g N*it.Q<&a5!g')5+0ǎXbhP3h֡ 3:AJlz* $ JQ܋J)8s*=dI9,a#,w`bO.Zu4 5K3juC(ܚ"Uu;|CiIaG?<s踧YLI8I:%?-ffе݂ OR&WG'M^D vfS6SY.H2$F Qךhm"My|g~rj+/?DIIK*j%aւ+ƥKm`y`_ &VbH,ftp_Ý[d U""%)J7"hE 2MkXXa4$m v$rNp ȺP"\| [\9~GQ見kɷ!m$_!8]ҭfy Bx,O0g+CuaҨZ ƔVZЛ@aT^@5UZc~¡)}GfgBeO A,OT- ʒ*) agdOi+41;OBO`G]t 3LƊ %ŤdD9A6({i[KcNQa m(850BD/A*̉Kx pH(#Db jyٰ֡@<|ֶjVo:4kZ. mZy"& 04vVzVfu:4,Xy7,:ն 5o,Z$L6I ZpuqE,Zl0ZH̦cx BŊ2 bP<9ZXqȦ 57`Mzi" *HF8Β̕~)$8?Dِe,JNAXĂwf y!* Q\w4lhxp+5Bj& .2pȌ2AI񉖘yi~MpFa~e9GS X-J{h9:hP3Ѱ<|";X}`QwjR% x "9w1-W3 6:,8f'[GӆYG[Ciًhf[-er֮ t,ߜVW%|NQ&(D "Z )U# o[HW[g:mI3yʈ6]`P1(e xF╚y*D'0e2Cơ*NgCH;OmtR8r TԲ߆&k9%w2s 3'(ϜI6kh$_B٬%S!C(MT  ) Dk0Ez{GDzL%pe}G :ص9A$p?z%G(8?A=l,R#%br-a.ۋ`= VS.+Cm:Rl@<:}&LJ.,޵NoCSλ͙>3KS/3ۓK+Z,I@kq͵wԵz8H{/vݔSΦ׺(bo~Vd u 1M=zU&>:_m/IYsqsqvTم, _$Ɣ)p7]uSq` %T=\-S ܺ@-Df|rF8_Q`riLw3 }ܽ^s 3VsRT6L*n^űcNG5 :q^Ҡy[~=Ʌϟ BY=_{a{>fñNDRwSnēQsoG4Rk,pR!; rDQpP8gc HN¹Snp(k$Z&dյ JXTk {g\e":Y~gO^@Y>|\ ;"tm{B껽l8>h ]??|kytַG;~L>qm}w gRfo6o/} "ٛ.@t3xY;;v?pǟ^Fo%ZaC/ݞ_7=NCj;{P9˥9y3剳xx)GYEu{x}xN;Q\/mGgv zd>JW?rU vKM:9C<3vEJ}'s<-9+~UzAzpQy_taƯ1_9ke?u}N˫gI^r}?A#_w_Lu;ڃp. ;D_.άoAΏ˽?*a7h/O {$XfY;zzbUj.?<\X !?]~7 ]y0掐;:}1;ONuy=m{>}#<{ d/q_z;,lyj稷9z~o; O]AE[Coƽc[A9/;xy m/9ߥDusߏd'޽xw?O0:^q]. +#+C]Ynḻ%4~=Ce?+|wL"rv{10% F+|~ (v\/}eaq\h=<B Y_~?OJ^( IfT@Th>7Fz^oOU ?[=|h$Ԙ`$TqXR*E.Wi}pa\yu6!x)Т2 oPˀZ2gΛ)P:S@mz:B<. 
]JYrKiƼÇ57~q\)H~]UoeM 赤RuÖ1a>y$Imywq¥R& $ȦVyłdF.*E.Qh#m䢍\\LᓛfKA_'-e.HcB\R񀁋Yɟ6p1/Omϯ-ba[VTڐDk_itM,D#h/DIP8ФK "hDKyXPʵs7\/]hZbەJJ>v[bۖ.!|fVZ1 Zz*Hq|Y6oF"hFryo Q/)xfgV%F?~j+_K1y( Z5ԆriҒ'h t2* k=tdl)rۖۮ{rۖ۶ܶmYݱ9}3Gmy\*^2[*έn60YJM`@`(= Alؚ$j` K"ga_[Ӳe?w~R*#wmmIW|[c^YЃl(ƶ3D"iXj4n4`)l 2ɪV%=)}5nbtʍu3AYHK>˗>ApVlW6qާjktݍh;#I0noVJF03lrQd6F{keNZF6Lhꓝ (->IZVWbvvkUZVŮUkU]on2) V7k?`/M!- PIZ"fګبBY@=,OH~kCOO`s%q+k+ 8,ТJf% CB$(wKCr=sNפ~~9٩٩?JQ0)g#{U{(4FI.`"Eq.\ Ar_3zJ)gbJrj>e(WN{ʵ;u2i>i s,&ޜ7]vᡢ5:Ok8W sFo9ON~MzpQ˳Ӄ;=Ӄ;=xw9H*cYT/UšٜYn IӨ"YKKxg!}w== #zvI丆H̿<&WaŴy4<VfֿXXx?/)[̃IVFha&1\fi࿴SKBwkh+~orFPptU7ׯ!4" >˳O'v pJ|g>o`= R~mwcTWC-Jߏ(Gzvx67kcT$d|}OO~<=}ez5/\qҏIx}ߟEv7gm'?c\)m|5dUߴs{{ZrK]~Xjً[B "g:7\XΈz~v?ݦe[zw mf_zmLvM-7]o, d(:SM*e٠jF r&<#E#K!<)?QL9å>Rux?>^?CˏOެ>Ϋd ӰJz%!Hf+$nB؈gvMvM&g,Ik줳l,8pѱ4gɾz}uMi&Ѹ$}#K)Tɟq//r\)^1>7\V۶dma#2s*bF^(ڝaJԭaNNN'??O&s`Lrv%guJ7㋗{oƿV_/gp8*Y E0N>ZU#ո9qzhG4+9Xr8":PڛR :6L%Rfx= 2Vbk42c˒#(i_՚9{gϛ"uQn."c/?>(&mҼ"_ًpv_F &@ks`n|<)ף7_}2(9@JDvd^sbR#ixEA  HH)G1Y՜b}A;J"Soď(ďڈd8E.:%6k}l_F=Qnԃ`q7"еyJ{i  _RBts&$z}F}:>@[nlDz:M.5f"~{mBnu?jgs.bkBA}?}EͷoE.>g;p`=('fNfVJĦS%8^@$5̐m` 2FAaeoL%2 ௫D1(N$"a FsC HR^ŒKx:^(B=̋oDk~ŋb8 &[|nQ "6 F@d%,i'^ A)2x{Ҕu $ n&IAKϏ 0#.ޜ=zl0獑7 ̳!cԢ# kh=Vd ^ɌIW<9y|baܫS !u0Y`GFZЬv3Ӝ?@S@ۓn=7I@9XiKz QIpcM9M'>L晩Rf<qL\FhɷlV9lN3w3Ͱ6Ú kj3Yl8x=z?;)Q*Ef[K7lEVF;l* $#A! p)ωDń-dnr u!B*h@F˝n |-o :{KG'E7}mOB]z/~z^oZ<%Kb>$}{ԭaB11Agu{׫{ 0\{'?ޤ.t;f -,ۓWƹS)W79y!pG#9%\@&'Q1ƪu_L5>oH;ROzޮAc忼R~lʘ՛i!,m^)-e32b"5j8O"qJ/=I"R6N'X$'؂]s,[flHm>f)cWQtԥ.E4u)Y\胠1Lkڬ{ω #ƣ<IϏfkB'YN1JJ!e`a6^6x+WGy#Y8)m .J#gkXFJ+j'cdǹnCd2uvzZЩ)Q i- vÄN-䛵(+NME= UH+\IOAS`u^A8{;5U#p -錵V8a VϜ KfryXI`IrL1LG XB4.XJ{LYz;\)ޅLΧdL l=c Yץ4$dod)lEQ[;̵VydM'4`d^sPk0n:On_4WȕBA $%J&T7ME׺)r}8QC0)A1ѝԤ50yA$dK[&k:wRquaLiXj1[cI,MScґa7,i@ RtTq%p e]u.jqyޮABkp .K'n k; "~[„m$7/ #aֆ*.8 d3DUa_Z F{\HH>2̵:7Ke%U7gjւo dl+1]J#V a Vcv(.9(Iܪ=q %⛷?dd序%$ VJq\Rx!zzXe|w^u`Um'&m-)7I慟I ;al;陜0 gX#Hvk|:,9G.JS "+ESd9><&G( </ߩ坬FͼndLCL|≗н~'A!d}`Rd2G(Q\(v|u *mldFvs_¯05hBT{iJ D\%2^j.m`Caobj/zb䑤*g]&s1z&@6"1P'BYJhHQA51Zd,(:kJrhQX hhlمE0FEe7SJDS6I{`S C^pG{m! w&xV#u ӽ,WnbaRoay9v.s$T߮Z{u"7%ˍvzo'K+ٱv:ގA !>,ye[:Sי[`Y 3)m|%ҺoøSx N6o\7#DŽ$s.:ٻ6%W?.0R_!IdȊ%Q(%jJ(rHp8!E?ؖ,7[WUKHgt:YHychjSddU-hVj&`o@IъbZ^\lc_Hr UFыsE{8.X~󋼠[Dr)rVYjMJA^r( sP1NHf4;s/?~nq$U5L2Q@q!ƉYC)vs]k t|K3&hwuJ q}f"JH˂(TE])A'*V:zhe:S0I#_rʀQEk`EXɣi*EkPo%ͤrp{Y(5Lh$d;r@ g &Zv$hm蓱yWYޏ@< y0Q6fmZ"Ɂ7lMBI&V+Rζ%ŬIF) %߹IFbNɖB@.grpn#JqAbVFƀdKf'adOtr>AZ<j+ҿ HQR544OWk(PtxɆ0@1]*J2uHHK:0D%}mY1}uu#;JZK-pI`|c rNV+۲JL"?R<҃Lsos}#ci@'E(2%k$'>IhV0z7ߔVTFiͭrŕҳ!f۴(*AA." TGv[&=:ZIA^Q]/F ;J*cVH2RΪʥ2d҅t6Ix礇욾sQtѶlyB6N#V^ȓ4YrRIFK_JK yh}B :yO_]dNԊ!ympK< %RY*zH5pd)U4̶_Q%Xn452@(,ԒS:}B^@V\(<41\]8BS?H ަE%9TR悩 Сol$ |%I];?{~}X+퍭E2<OU>䢪%k)6վ)y2L 2HNDFzEnMLQQ 6:p''/BZ aa˛ x_BjkJo*K.6y;ހ _so}lC1,@zy܅%RxE Xh=Ud4xiQM&lq)(Okr#(Ԩ+iHɥ* (5 #i"\2c#Er%8/O!qAemHP Xn-&%6<\ZJ^Y. 
Y\]kR`*x!h=sǦXK2xsЁ%Etz%MZ_k%ct2fST$H}(MЁW~C~`^v2xs1nWmuysz\™MUj& 1a'BJV6 i&?אw Űm{s MeR\l{Z4yJq'6Q+ȳ39hJ3נpq-agW*NC '=Y*Ol#*jִL|1Alf6-z@6P̊B([!V|$%_ŬUR aYMiШ&d\eF&髺QF#ԲG^hf6 ,[JH}zPbkHæ 6;{Hjq Ӳ% |XvҊ-6g߁6uP}PͰ[k.ϰ&AlN: >o޿''מf|'Oͅ?+@TDǹ~@#Żg%#~ݸPO'vQp|Cd:Je눼EӾ9*;ԏ_t=tF?ߎut2o].;Lg߻t= N](bۥlʻc=D-18]#$ǵ>VNҝŹ 'iwTs ުoOzO^IpFeM҈~rq1|ȑ:G%^>o }er~{wwGٟwgZ/# gpW'DONVWʡ\-BF?LInV3K΋ЈbμwVdrޓN͟%ΐͧ&2άg{0hBrvoIߜrvjwfV շ}5Z VI9^)tr~6yXjXkNYؚ8+\i3+idtc1`u؝\FwpV5r/+>cل'=u47zxy|ė9.P`>斠`5Jooȍdb~QP?n;јD iiߜt`[6㽙NYURģURO-`5srikC+.7qgV|3/X6DfXc%Ɛ%_vbzQ(,|S@ѯ }}1}%bIоq}ӂ IZ%IA,Rg,ezRuu˗N4BDq1O~d+eu|<]6J9 QVIYoњy%'3 aɪ"^svRKV~BgV|n3J[-I=:P~8 \k_9JCp4cξ%_ r7Aӗr#?;aT:PN&9lqllu:FĞYLШyHؓk4H}z{=kݞO[Y)^ߞ ֹ I7ڑT4]Og @pU!WH.j I5L0Ro`^V-h˳˱'$:9RDDP6]z^ށ^(0ˣ4Ǣ^ԕb~SZQ(B9:Mz?Jm^0Őyg6jE,X\| 9痘/pvݒ CLV ߟ?lb 7I.џޗ8rSeh^a`{el݁0a_h<8:}w(_;2`.eEl_CIuv&/a&]Cvx CǕw{9d՝Bk* ;hSYtPߜܝ]O㏓镛~Haݝ?lgE+3%D~]GDz~2F oc>dU)B `B /Ķd"U$Qjqo#gJy+e1u7R`¤iۥbhrI|WNO*W^IHtU *ʚcKKJ/b PDҮ܅Q;U5^'_r.9G(R_q ~pEUZZ5p9u0o71h# (_ go5d;gP޲;%-ߝvb_X17`ىLO?ez =s.̮ŝhRhoA 8(lL:>lw$`G-_<*I/PQy#nE%.sE(Q W{6|)tQF(6m(¿(1F;7}h|d;6jgl F#<^ onNJ2SпC JƇ[ߕh#m ߗP 9ƊP}98lߖ5u+ Db%/[1Y;sSߜgoGK_=LSwg>7?yw&N~tvuuvyz4l¼{yZ_eP"qi{rO꓇ݓ]s~W'y 9_ +ϸ1 jӎL|R}Ph[Z(\B٢:)&2҇gpCk25nQCuRZW)6x8Z2#d"6*61uoaXoV s@Qsqa-8]g34mZo߲1Y,HXK{l9 &|Sz=Q3T^"ā1%"wWQx4`OFQedxPFo` AoiMP*FRŇ*@`ldok&A0p{\3 (V8}O5>'U@%Ơ7F@7Qkc(E0hL⾗fo9%_1}+Y,fksemj+_$o3 [JJKT#(8 7&x6NA#ghs+6Qϥl'|k|,/: &CIl<*Zhh{?umD@>i G1*ĥCy>$%EW#q&_Ilx"W7\ǯߑuvH 7ϊ4$ 6#"Kri/Y, pMr rÀ3س3aeK[luzٱ-OVXd=x/\x1֍ҙm1iSC2>aG4($zk?x{y3i†ݍ\9,n§E?hyD a߿^9IKh:3D^VIw?L'E\~Xڒ'I/-uKU7j&M#ٲ7;HL ٺ [J>Dswλ/Ν[qJw+ $vEo\WU$j ֆ C.kFPxƄIIL1Tu\%ţI:LUU)jw[IsFŔt&iyNȈ&RkØH#R]qYayJj :}9fEi$H?8IFqxv c!?FB@qq6Zز"{. cw:Uru+cMR123 pҏt$#$)q$K#4H@'/Q6 s9Β5jY^?`ie\됥,H6*jNWY@A ҄2UF2f4'!9.$w擖W1j 𑠅RJ$0rM85t8(5jˆBKW;sQ7j;%UyIWHȪRy㓼O4*g h1h#,Z +^uSK`Ez^_9Sdx)9JHLի┣R;t KdAUU*4he iPWa22`%)f5)J!g?/lrWd<{gtT"W{[tceG.Li֢RۏXHaZp~doՇ6 cE*=l1Ȁ]1 A(C+4lR!+r|^>~RPh\ao{w=}:XcZyiߣ}dyߍ@P2Q>53‹ ry)ϙJjhN.ۡo.zo =T@[e浒}S~} ,8.KBc%(oKI!Jh'sZ($HÌE6(l8p'WsPoS([k"m9ICR3htL6W!ABBcRP 2NIRQ[8S8Һ#E4LZr.^pLQ DYĜO0ٸ6XG7.y2,8)mɦ 6"ƼP*n5&~ڽ^~_U;7d1_S,:t, z[EAoEAHJIAq_\4Y:S#P_6~C~{~L&^z@oV $H]r5~6O͘U կ|g%y3KEwIӪlm9aY<|mk8)4Le9rTAK Tmz 1rJ8Ѱhl]bb-%(:,I9g*X2ʼmŠ4mu)k\P)R`))LuQ(tׄpFw%!iInsWH7sxK(gĔ|'OY|, b$i#JNO2I[S3 )C>8BVqDxWXQXBFnثH }59ݙ9$yL )ʥHaQUTz!+TȣJxd r 'zd>grDPCA?xc,#U57 gǘ(5r,lr^NmBq.#9>k22Rg$vyAU΍&aiz1)B詰ɹY8L8;]qWR{q)H6,Lk\:)*-DY4H5皺 !Dq*РKJb ys) ̋EWyT :Z H&vnI $"8ZXg>(zlNbЛO2`|t|mguTc\uޓ4,`txgv@P\Zski:_N o@;%E4DlYjRB9 yT6KNJt^Nu'##;^˗8͎w89"18 78eBz#$z#5D#QCs<! 
lL8ca;kŧVWy9\RPRpιBf2(̷j6f)g|'k`SSGZ#i)hVBk-3 *\H/ SgpI)!@L$PjD^m־m6}{ ih!T k`CH_́B(RkQ9ԉëGڴ9E 'Qi]x{tFD|Ϝ$k#CZcF Bg=Xq4D8ip%i\Kz-PTԄ2`=@m:zyV;Ll 4N h u¥~oӗasL>yffH*~y/>="*CbF_05; i՞1&4ؗ s_3P((C"pd'<[E)آ*f5gùauwUyzoo2퇄p59Z^ a5:»/lw9lGMͺM: tW+BA1 -G-(]-䇼'V^!*[Yy™h %B#gICT^wy9<̂qFg/ƅPl}EӥJgS/)]tM_QݵEKYI™D?^rTsV}$7G]ĖxJUU'ᛗ_˻_u݌FL>u{ց`}9duRv4]_uURQN N8UeV乘&0$_s#/c90V2i\e09u`F9U昏NrFyT$J02EN]8˂m X< HH|xLf3{UEA22.W'p0yJޭZ'F:,YD"3" I>D I4^{lR2xJ=H5:u4ӾT|`"&0Wᛇ+ѹpǟ'pXoxud4PӨ109瓉  '9sf#c19}C% n]ڔd^ TRczIe585#@V:9c<{-I``sv)e'\tTQ-Z7~ĥ,Gwe]}kڗbW7kS~- =cL(< o[H[󧏑(t>9+zSk{j^yk]6T5bߍuso)ZGdn=yD:?W&dDb:Z]1u l>cNnHLDҨ#D\|C !Ie);+Yó `]snh̉#dy 2)kÒCtfl_W >_U: Yz%%eA|?WWs$ hECbY;6@ }ߌ((*1o2o=y8H Ei{-` ԵxB%ԑFt:./7\af=O9DdJ..);v>9mک|W+`Q1cLhQI) J%JMԚLOg 4liD}V&ŀdD8{*jmq6RYJR81/e}{@)REdp c"0A񐛀jaS6R\2Jxtʤ03M l}Q*K)q ++3zQ6uto iF+{k3z˭i!/E5Ey1S@{вKop:ރVw{nZAe\  Sxl ' ainIC†8+ƒzHw5ж&hc e޺/y3&Q!ȳBn4lM/|D"lM˹p뮬8GXykxjъ FOɜ=Z'W_:P3^yE@@Ex^hM /`W]kt5շզt?l*Ɇ_1ITnQb^BL,PZgH9y(Vb$Y%䦛10A!tٮ - Xc룀HH Εpl"FJKޱAiO1+0g)c>sk< fN*;&>eBgybPhPz'0AIMһ >[ݙ7CfBXg;`aR볝`dqt 1dӽrJ g;AUBԒwʸ +Be[8%"^".KYfdgfїlsH T6 #TYbLej[ ڮoĮ˔^*#]ٶRV(-pNe[8%!7Bz)J@E:pmWf,wߋXٳfzmR#.-W,Dlùiyj_+񴿕{±ޕߊ (e1(gN,o rAEtů@2dSbɉY,l~"٥\7Ɲ+!)\VK΂W$/lXD!*Fqcbfl[KXF@adcڬ5G&ŵH"*m HP(4b1A7HG&pH;ꂸwknqH($Ec%VQHLDPY #,Ah:*&r!7z::{uj}をaPFktJ{p$A<Sx l\K)o?lR서˕KҚ^mzٶ@[wG>"~> ?]/OO·VsuwϴL2i&t3;(\D IO{bBT0(YJ3V 50{'Lvĩ$Zo9~Gq(قn/'AH#IL~hHtʨ,!istHW~nȅF.虌%&9N$9^\k6'˒F-tK.Lֽm&Nɺ?Gm sc^H3 #1ʇq6Ȱ q\du?YsP $fyЧާjEφ!N[?,űkl2aJAw#d奰d; O(ûuMm.pRpM""$}b&d>"ckƢ{azƒսv~9,UxQt>-2V"֜1VPR@Jj- QELDPpZu_VZu_>>X8QrRl[^}vMmEژ8FB8r{/Mh$ R!  P+s 0SJ1&{mVA+YԊl7.9 DDC\ 90I 0V8kƇ2Dإcx9(4|-Ԉd|mݘW -}=]k;)ԋ7rFf!<G{f=7t74i 7oZ|0~ }k`)iۤ[RF`k/ /8L .lx/ٟ ߜܻf!1)}Wsk~ -wJa\zNҨ@ l 'kem6|Gƶ"Kyhqs%[kiA'ϥ7[;􆃣vbuc0a<@`Q^͋ ž7[(*691&~ЍnYSL5TZ͏iK 5i1!t}*ob|MMޔNf>[XsF{ [3{`Mq^B ^QܾwmP.F[顛mva<#Y{m:ujHdn&iQG{&5 tft-MDR&Z8?~h-87۶W@~43D# $irr[ =r?EmQ폤mp~tݏMgƭՃUob^>_+/lA=\%vLճsp]|ٰ#.w^=>2Q9pMQ ]r=^?t?ܰ{ݞICۏnJ}Q?@½-x{/iY(ssn#7~m4nwqҤ܅ۯ'p~jp P:}&{i6nN6 k-AOzi~f6 rWNH J`+H49䷼nf2~KzYR+;]2ko`ӺŦع4j[0Hog%ݞJ z5 x|KBM`uD-3=RZ4<¡\2Cx);na^[amldh\؝oB;qަc;ĜyjU'6Jcjz&l.mMb0Ϗ=um5Ûڻ.j>QD@jpz qi |u?[ ]hi3rs$ov_?0BHpdtn};#'wG3N8nדW$iPǨ᥵w8 J~kvu >:{P.nfz}.׾_ /YNϠ3f痗gG0]rhtG#2 6b鎖hL{yj+7q ;}4i'rdN zg~tDoYeC !8E>`Iձ{j/4 Z0GSx$! 3ոG }5)rAMFffdx2Q|a/?~؛zj4Dr< NPEP7jAK2x-*kŵþ7}5 uYCVZpkckH@ 4 ᢆ4؄T8r`  b("<@:B[h, gZt=syOLYDX=?8ҌQ/]rJit"&\9LYrQfR0mS7+5?SrSؠv[iwTxX_A[RmL u5S yu^l]-+YpGdAZɂ,hZfi*9S D/W"1Gא 7 5W g(QԼ9닝k#aocQlBlf[KYp_0$űc;qFH3|awXG3l Vo_[\qT :%CŘ\TE{َXUʑX9+GbH#IWWW׽xT9+GDs$ƄJЏC}qk_jB0z:GJbVs$Wkth& 6BV*.Sn$Z~3F2R/kS 80OĬ7rPp4и_ˋ.TSh볟|1L2_X#33[rT}P*6}Hp #i| 3ιվ NJ Lprs̭TSAj-|9ߚ{-S/'\RRg&hh+D叨?G6 CL 8lf:U KLn*Slm&(+&<5yBSLX[ƒsK̨Tv4(LWRi4. =U-XiApnP99 {i~ӗ&uPMVqr]20Pw#ZW\!]Vܙs?S[0a4lNV_:Z͑-l1F|_7!X# d쎸(jpU6 FAAC6x7<"BiE#g\?={8׸|/??ĎF}h m;Q0IBTA $js٣f~cYIYs|/gyEV28o=>LSk9+4Ӥ=LM…Γ ~،ԏfTwfDK.ѲOvג*u)^iۻ3kuco4nx䬬?]^#sn1;{uF;ƶͅ6JjtČp0mٍ.)MVT?2N/Ѽ!- vsV? S y7Kfn/W 6ZPd}X]8)W*g6n.CxR$;#-Osף J>fmM?mٿbr +nqmvxx]")E&vvM7d[vD%t 4-Qpf8 3bH cWY8| (j,n!R"̥e˜3£p0&/: . x;udxI鳼U\Izax݂HX @{A5ni_|<<$%wr?! ̀=˓f1.ˬw*sJ3Ϡ4ܶpIfpEd\=r޿M!.%\|c=qOVI8'dj>eK환㮚v+Ȼ0af= ?|kȇj5WXzq%wFu3)bEJ&dJ hE""4c RtFh@`|P [p\Y6oТ]k&ljF?,8Mƈqehcbq*8LMTюb 2)aH),&`=jexD;R9φ>ICM=yWm& ktiP)2+<2Cha+n:.x(o?Oz{\ٱ3hk[EЗ_en ,?LʄVd`.<͌U2+2Ku&-+"e& nTde)nTpm(2 -SDN 4R( y"d MCQ& VA@%#pxbtXXń.? 
m_Ss޼_k2,~%EvxwoJ87/^M}(}{?7˽l;ߺrA5Mr^u 1,i!CdĤS\E`щ+x~i4VJ6v>cyIwsXeVGZRؽ &nE (KX!`p?rDs N<`Ӣxī)\[]0 H/oA_; xN]1la4 aF)x|޳\.qB<BRp<3Ľ?+SCSBK1J>IdJQ!0v9Lq2 m^NuH Mgӿ}INYQg tF-s9HH2cMj?_YW2@ )7)ȚG,6 qQj9vFRF6VUlH!݄F0,R`}>ߓ|eB̄;d&|<|U)>D>!^0wCdEx g6!ި̱P[$;zpgfLwf$ĈaG.ZF7ݯ_DqⰍ#$bq,aaMhr'01 ba?8Kʉ_kpAɟAҐB+@qz>Ϋ^}Y`zo!=f/|T2$44SAۇoMo4_/s9kKc}RGLƠ@p/aWh?}")߽?Cɉ_k`W AϟA |Pӝ{ܫ>ǩLڿ61:?ʂNyU{sC[p0+T ` ;h-|7zpZA˯?H#)˅b-OfsFY%ZD3JCJl͐_l(Jئ`u'T%RwakH8^DE'Eoh'@HN(8v璤ٿ,X`)9սEyfAjt"y^yъ.Ql`e`ա\ՃέL}M 7I<I4NV\EUFm5=,Q$d.?1%(ȺHkSN#V掘-zѥBz36NQW"NT*$]ϯL8jY;\y3} ;[ 2 vMc8gٸMzd5LR6g}akŸA]ݹ'H&_lk/5s1C/}:*6m4pcz 9:9{}RCI[k1Z=dm!y;v%QR7sU}ENGHP ՚T8Ǖp;r%w+ɊY>As`gGBm q!:f?-Xh ?$6'IAy۝}ֻN7`G/Z?v\pZ!c'2B*d}9"SƋ4z6$;F&樴Ʉi"աS.'|0+RHN(vUy덯pwϽMw?%\bUx볣jċ# 3,p٥+,D&#S:yHS*bF}Ls¬V*9k J.9J4CH_IxXr.Dke"%1 ]qd8Cimb(K !*,k^ګ8(|r ;E,OD*3G<oɅl _gk`Gl 5T 2WAiD˧qA! &Bk3kUmEHntT2x7I,YҪU1!0ibʚXIZkbI"-"*M XIxQ<\DE6a a4\{u]7^& V4lβ9PTOơrū0T2w3xt̓gylUzXa糃»sѼSdu$p ‹X,c K,g1PL)W**u91rR;'&k9i7fY_3e#y> |?)C³?}ɳK-t5p7l+P=NSd8 г'y߽*`~ t':Kޑ'G»o $DzWL.r};̄ׄ2V_˫/`\l`CeS0#a:b:#- 2[X&ϽkR1!Ix |94eM _ߘ m.puV |~YT$2E>i'2(LrOϬC+O;HM"f V,!k6;̤`6_VLa)a-H#08$%R26f a%%Wg1X_M#oO4 fZ}sN!+ٖ ,n1ʠTIRi5V$NRodR4u)hg$D>`!Bc؀lA)NipK IK6IqcY g1&;LMwŐUz~ҵ>Xw'7؊^? rv\~~;z ! ٨QXN+P=w]@#DDH4"֭zG׏): r:yoZӭb"a Ռ!FK/ v٣SA "cZo]l^@6*b.ھXSJxh;w^+,1*j8/'9#Sr{NQcGm7Kav4M JF%p9RW?rNnN ʫFHlX"b&s8|ʿ;(T.uehN zWq x!DwW%ɚDc yACI2Gk{-]]~';aI9e8ʁႌ$֔#GcES G(D}ʈVTH պ^l鐇e {Q#6hI& @#_~/U ɜu%dڼVC`0u8rR{M4%HYj|e;XJ at 2TH{@@*5P ` d%ȑ`Nj\Y9(Q%Vi(r 5=/GHAt3,Z-, FՇ^_fwKd6Vw49nYVaY0.ٗuc{]d92)3p>lƛ+U;Ռrǻ\I)DBaˍA؆I;FiXro[!&l8^`rh!OвuQHc}-&wocrSH[\R8O5ޚ1ZRlxkpkn5|0jIE9b:nmOژնQfH4=6z b. !y0[$g].рHބ3=,[ %ڇNHhCSX ƣFP .mHQ r΁fE" P td}a&55vC\zwۻ7J%q)6d$ANR$j(Xf|˧s 3,X@⯑2Id ^FU\mF; gpRzlvp|2-Rh낋NEK&p.Z)R"UUCU,l'_CSdjxGTo}09ȒZ-)xGRU_3Rn$j^] ^83ǬPbQuPH# odPebI *t2zV:Ɏ} u1e.r (꾅XK2m)XZ'V *Z E+ոUm=D+vΚ[ȌҲv˘YRɦb?q RmJV"X@Fwp,5DȀP! @#fa۹(JqSz6I=MݱEģku7mDk i 1Y肦ayav z.*r*ԶJ0dM(tb4yن<܆؏4GYRԖ`NF)޸hb34ǺzȈ_x* ɧ>:;[b7&eTBT*yS d: :Xlo7nP/Vrq1#IFXWrc8maWRQцHlٻ<+[#+EL[R})vjC8+IEi&dYDSlL nJO~I Dawptb9JGiF–p2]fЄB#qۀa6ohVk8H-\{<(I| ֶKe5+׽7|텿y沶>]k$1}u64.3aOδn}X0IjsN:J>u\|]&nWzyxjvFsO/jDrK2d*}A&oB%y~cEf9>J &C'jI"$d{HwJzر.+pVRLFGX{`M񌻲T@\ϣR$xPAFI' FH*dV KNakBD-Z-ZK0ۇ3QYeˍ U>0K::pXC򜄀w&Ռ?G6jtڠ!0+3R(4`p{LhۇR?vw ,sQ~;:yPRq]-N3 zK ˂ $1:Z(:{as═*Y6)ib3#(H1#hnBVe[VOk+մ|]wwVXJJP 7+܄ւ*fP;kwUyG?BZiгy8G]rƂf`kUܐ:6GkQRF|Ũ9uB=:]nN,n9bͶ(2=Ay;<hW*LUh kr﻾($wum51pvvөs(0 Ƶ;s% 6qzw0ȝz9j[#t sTZ=˜TG Ep[o0Ʌ$ƐrYH]ǡ@Ψڤ{GcQdIc:Zl/V9Z`\4۵mZ!Wo~8GDqFI,ӉbB1ZX-ON%-Cm1+`C6\M6{/lk+V> p~f3ΞQZT_7-NKK>'V{- %7KCC'ܩ?u;/=a䍑GJWxEpJ|pt*IT_lIT.>8S}Q@ 5c߃?'R%jI:˱X)~_nX; R؞7_fOsxajoOmK-q1}FMt7M/w渒M-'Vb&X\|ӛw@_Vw'rS'Gi!`j6^; iq&o3 :k:4 QdQkAfR B$RHFQvMdSC5Cj(??| :z2lzAKdHK٠ѡ8P"{lWQևґYX@ _ZUm Svn}̤~iMFJvc{MGFBĤ:?FϐthRs.!GOL"mix˲eH+U ;4 @9\pK:%o]}-c s_IZ?>\촗w??{-Ԯ݈gʤ*-РӦ$4#h;* JjQl%j(l;;mznm%Y~)qHy}zpj,fvjdzjӽGufUn{Q sxLvW/?uX2߫?Ȟe&bs[ÉԊ.~/^mv7+[/a3VzUOWfOܞ9 ]\-g^껯|レ׬nn?vH+F>|9va/G+YR=yo[\5vHBp%S57X g nOhg~sk/WM!!_,S4ƥ?B_7t5oaD?z lqUeX/C+>spvODum"6yn"^7UcgN|V6BmoPr%- ۟b.Z%ĿXWa_Acbm0JXzܾ͛".~b iDMFvc2pV]m7?4-'n!PX;MF ?@{IYKNH}I! 
">($X#7: ;SƖ?پ ktt[;ID`"yd̼2~Ue(>$JFʥdY-fF*GҪӿJŒA|/JY!)5L`G;b43yk82(o(m !"uJvZuZtP<)gڑ:k:ccj?.wf ~7z#-Vw߶G&S\Ҭd+p*DVEmFUDJ5HdJَΌ7YG{5iReђUՇ=F+UP=NycK1ˎ K,:w(vikT2K8 (E#(UB47t-t~|۷`gjO'?:NWs5;Rkyy>Vf_jFPPt* {UIx=^MUmD^ x CN*C*p{WE>NS@{}oU䊌I"qԑ8[ 7z4` 0d"(f&8d~)e&D/  A.jr˂%pHC j`(jSP]Ld8zg>$Y%rJXUDҤy9/#{{necL}td@jq|̡+]C@xN#JO;z(tZm !h*ȨiY))X2g!b)#k5W8c68hR;g}$D%.lstFKi^Hc 5 蘒(l0#ҠPǰp6F2=l9DESVgcv>'`d.cW(;twy!+N_):PjĂ͓eػЌDv{=@ۋ^bPhX_r}'+xp i3:y 2O0yuQX.dTqpipFmZJAidUܥ V,qXZ49q(Y/܎%z =jv5Bx g[^Wf)0XﳯI,&W.Ezw}0ӯQA Ջ߅Gܽ_}˛z@'l qZp..-R m Y #aZrC-l<2cP9v$f*i[W8#ۇh8Lbцh b't7TJ Z_C&ihXp˽.ga79wX!m"v}DyY, ">I!3c[ʄˆպE 2XEE!\Hn 8Ydt?W,]n>ޭLZIp?w1č| 51DC8St^E:c7w[Vr˟'.hMy%mpqʳvR,S/P_"ǁ0_Qr1P(}̏\)b9&?/A2hv8?(0#O8CzB`0\p*rM-uyI uX`9[_wCE[q h XQ8ZPB\bo*Xt mlfIBHF座'˔8"y)b@EbЖHGUƕ1k3d@\ "_Ls gpGFAuyqsg LP*ވ&VIhm_=b656A}r %HSw x$gjJҊ&G\ia$0-8M^ɣBMz&*UzKy鍊J%48g.-nٜ=&Lے wlP^MpFam|$}u 5NƆQnM|wE Xh &PhZtS p\6H6_&5&u?@AոpH+!Y;ujY ۲ÖZ!N)Qnj߯ʨhՏ.*l@;ë,s sD8kB[}B^\tc~f;>\cۯ3~4b:M Tј*mߠJש}KbCTRv?'7=Q *dU"wNE4^/v嘲q#&QЊⲥcG5֔ /˃Gu ` 6Ҩ){f6/ ë6KL8Y{(tFeDT8+Hjύ ZʣJ>~*JsٟB'ifhUa*m:WEҪ$]Q<28?e,՛81x{CNf7,}O~%By5?Ś[p>vՆAFBݥK2D1B{$1cmyØHGel]ޱB6IXo 4jؾ%:"隖Zxx2*@kwk~3- cAؙtqj mQMMm价ruWEĐc kS8s!Js`&rCm>Zz‰S4i4JzR36̗7Qtw):]LƁJhmz?*f.b_K jN4`+Er>&{z:FQmW 7+=_u@K+2}WVd5"|Ot! Q4u'{\8a5Nqѝt緑.O,$%^|dTCNplfCMF"dFTZM^}ӲSNS,2JYEU$-c6$ `9JZ[*5_':^L=c:/`Ǜb AS,/B  KYzo)=C 4`G4c;Kͩ0/YK03ōqܹ8.2ˍ9etT}Mfv:|x MÙ̊#eUohf`li`E#$ؘZ`é$Yw@̡o /)eZ(oR6H@i񊗀ٗ,MXr%ȗ>$G{.p!Ӳoy$`s#aǖrߣ]p 䂳42ܩ*\eP + 3B%JS%@elq'^b*cbRtUsE1YQ) uG[#z>k:UǭY5H}1ƎO˖xA)pcUӌc*`=c#+tK"SO-aNz̻/}0Ge=noת4܄) G.)xïJ5҄۱Gy?+3hʫ<[ʓdz4G:a0b7rÆPC|3>@r=\Eoj㋮CH}HiKޣ}[sBۥ|r-WS~Aija@kOke̕ԕ~:yƉ&lהz>_׿/?x;nt5})i 9c|Tp% }]wY2[_f,oZ7 7p:Jd[镦JWxix2:)r , |uﷳPLUq>}ҝ!Tcڅ bJ$45a#G-eVp7e(iPʼn8AwIEe"[SlrwHjb_yONd4MIґqk0eKj$(MZ.jv 茒sT[QJײ*i^V, X举B`V&=ca1+x6b!iS˚muJ>)o }4IۤLdxA]fcp5hYWU+~g"Zs% UÅ%x6SAZ[q0DXXlw=x~}B4 iJk_Cڳt8xM#~Jw2&s6b LW9hQ CZ$PEDg}:^ /kȁt`sڭ5"kD~*[U cjs ۷+w j:_U 5C'jJ^ZpZ3o[fahuqZolAMXGTm*͋|*#ݻ`obxp;_G0īnHͧIo9Vi Ip]lJ?QgDq=3ԩtL>Dqݒ[GP:rRLɟLKhPnFEPt'IdD‘:Bjd(z[<*N!&\H?ϣٻ<% 0eEtܲsIxK$2kɨ,`=Z1BORꝏZ'Z#HIG{L = zBF yKd-yBzJ;x 0m{?-E\)6)O^3z)7TZ954u)LЄfB Ξ_WҦ/QiN42'a䨉Eɿ/1 JxQ 3 O2!"f`^ ̋yQ\# *V:PRz!Aq&BZTAwN)ery!+F%W%J[w ʧy3Pyhv9r+nǓGh:g!w%B`!#&ۈkQN^- . 
Jq!mc9ىbq*ѿ!SFVq(K&`t3 8\[S>s@?}yvUx:8뗃2ʕs;z~aIUJyT.&ap{7ޞ}>鷳l| ejH?z(7^.;;#{5`T7q4MBq7 嬑ͿmKSzs]PXUdϩ:| <LI:NISalwWfPM.~]ǡUa˛˛);"Ud>{N hK tZkU"ofv^surA"yr(%/-S6$X>7]UA z!8&S( WQ'SS0- 頄 ?Q$I =bzX6YW (̨!ԴrXp_Ģb6A|I?ΛdWB͚vR)$<~j9~M3{=~{3W1Mtwu=OiVr^͕(JV?QMEEn(UBz6bLnjW-\(۞,U2ީĬuЉ!;9%f ik^J]]D Jc$93.˂2ۂIECH4L]- mq>B^P Rni:JB쥊;r-vB \~<4blZhRvK&Fq|ri*.7_褁#굝l-̱\0% :t2 hۋ]vF& [0Je8%!?ſaes>I7&Xu礸A2MWsRuNkEl%+|O*"TLp\ *&QLHQW;5` l_"Mkį{)b2ž8QoaO$R,eR {/Tfģ|P@IA S 0et^&iAT=::!9#=ec娾y\kV* f/;-dff}py6ͭ=\&vQa^ba emZcK993⦰ cyJمG|;ghx<¾jdQJėkl"$b@FQZb~tWԡz_VZ9rEOpN3,B fTGk4;,Vl9v0ckw(V3O@{ v*v֯pD{-6S{aG F]E/T\E?Ֆŭ,neq-nn<G-g N1TIa<1{[jo] J"X3KVѤ3ȃȞa)Zd1Z\D,B 'fCw{`E ,x#HZ+0Ec9[jڌ1&E'@ hhQV>Ǐ5 -ae֒zdJg%%+UgT,_+B*wKTВUBE ĞD.{V ^akXû便(}'SNb; FCi`<;ߚ+Z[^ZV՛pMg%Rz:).l9 z;;վE%O+u6# cǂ^e];(1Tw.t%2ܽ !8~s?}Z[=`(آBϫ :xgVpLŜD `b ;bfbf'm}^*>/0.{ìv HH.Й#E|j/A\qĥ2RaFDꖋ0ffTf6['[x6QzXYކUC?[GxȦYWu|__P>Ҝ"@ 4oǴ]҆"6q hٌGL*!ГFC$h9e!'p#]!0Hnڌ:uZ6cM͘ws,粋Q>&& P1=XrFZOpC >Ef3JP ~lod*nVZ4v[kӭl$jq2AD,_:}~}N֧r>҆o SVBqN˨_Q f0,J)QGhR5՜~e ._pYˮ_k[H dD`ApE)k5AM7Lj{D@M*nײfwq~HNwtoLtG_Xu@NL""!,K^@ؔj0E+ٴ/c5;0Ư\ntٹnm**jziթRyn;ԥ]xg7PyEtxur.ǟ?YS%a9J<bLHo^X%͂nĩ/rw#i^ ,OV3m \T}Ww.09iˡ# BbF D,2Cz0#WI,"#*8.쯡1"(mF={wiA/t)jj=|:}⊣0UzPqP+D4 nW 9;hY ^U^W)!E3)qyb%nXtI퐤4+}cH%'/;PI|Pj} LZ" 0et^ 4׹0'q'A45^-R~[l^]8ԾZKZ<h_Ynݑ̈́9}qKyWl} o>/ެWc0Ƭ!W8p>S2R6濒{>aw?c v|(VW.Xÿyş`x//W'ze 'LIXF@ªD&x>PŇ,>e.>n.()pHh蛕ɘgD0Ĭ 0+A̫\ցrA[o Ň8OO.OʰB6EOUSt{4%nHԃz:,vFH]A_/$HFp6sqFHN ФWԞJpN1Üۭ6a[~߃=c<&*5LT=k >K|=$O /3Vgp[60/3${;&Λsw? r9>}z|aU~U}[?x[}<\{[Oc]=Ez[eA>yr=;@͌HZa+cmks̈p''g`?6m&=&=&S#k|#͂Rډc!ict-T8iTptL`" -5C1HZPUfHjs2ᴠ^pm.%e3zl>dpNUB&H2c0RNEB"H(dVt: ))p@RMdfDX 8SðRUϖ.%&z$FeB F5jFmHTĒs97 +L^lϛ%MR_RZ&J/!)j ;= 6q ~I!Z+A{ɢ3A'iXĴ "$As10Lɯ@(=K?Ymf L >Х9"jl.::K ֐beVs\HoOܧl>'K;#ܢC[҆-L{ζRUu9r"]QM"8F5-Os/D* (*)-#(s1 /iŁ_Lѥ`+-qeζXǔpLbD]ypDYH$m"0\%:u #Vw8,Ox&8%^:;53ZS-Z|VdS:4d\:HxOvy;VF EJ[ij Un4$wǿmFҿۙ5Ӭ}x:YDCӪ>Cń6J3{Yҡ^yQūj(tg ⺴BəzVЌ/ա䃭~:ע+/Wt:X篌ඹ}«*$yLF5䧌7s q7΢-<;Ϲp;ҍT9֊A 봎GQk!J1tk^htCq)SS#g(nukZ4$VT|jNǻ1UtU}Zՠ}`CNo²KT} 'yx%=HhF0}P~zrbP.gJ?`D'=F0|]2Lq_Es.0>VE!r\ DuQ-%mqOlH?-[3uu[7\`7ljLv?hX+Ns]o G)/tm  "_EUBZ䷋$DҜS=uOT!ki'p,/\'Ճt5aWǁ.,w[O_|#I_vwM}(B3f_v;OE"Ǽ$K&Ѵ9%QeVVfVVESv'3uui__*gݿz~w5yu[3inϮDHvHtl"3_Ѿ3t'RTLmI%(T`t75t*1-sĄ:VHs+te\o6\jddp֦cR* UUq7VjdB _*gĽlm%ƍf5 }Sqq9kfۤ/o>Tx=gg="9O2uy33Xf@7nY cL4oRQwԤ*]/y6I?jF# zvTgl؝&_b/ s5#`}Z@e 7;ή|)<8Jx?ռ`x pޭ2.v{s>^4VOmӍ3ѿmؔLSTI#F JS ;2e{ $|53AW ;-# *PcQI{`7ydFoPF*/v|e5 ʹ9Fɒ?.U?OyԴa4~ 95JW1)u-}gP3X\~I&2=jD|`Wzg ka="F5^;24Sen$>cBh*{XDY]}ռ0Xaw\2#нFS߸TUP8~s3Ns-߹hFDc#9ݹ: /Za4ZI,ɷ|r:'Tޱw^9$8bc,A dztکW6ymae=|zxS58BlĖ^Z[O:Y-vrOwI8xӣss?'R\$GOO 85/KrOqn56y̴ٲh Yn6A~ytta&`JLW㯵W'5;eīs`q:e\sܠ )2Ą T.& DQ3fְX&5jPpe gsAAa\4(|w` KפhY^9 PA"b.D?Ev^k\Ȉ,iFӈ!zϮdrT?Xu9E"pcAϺzLУDqDT:5lwV N#ˌ,*&U/:&^<)Lk@*F=u.hէ0,A]Fm>eږjDqSNhd}5jcWv[7vtB'8tO׊Q2DS"k{WW7?ŝkO1=i9yGcFn9THʢTv$.K5 5wߜܚUw61Ө wMX=|'ﱇd,?%{IK1s%Q'*>eX"!R#"i $plX:-bHERUhq$xƸFbY&'o[ńh\FC 4uYuf3o,;hQ,3qXTmRZʨ us:pqASBy!X;U(>)]h)~g{T9sVE:˷8y(b!F8͐ :4$j WJ0`6lQJb4DH=@ϮdrqRߣoz,Y~1$!Z,1D QbO \zϐ+ N$@XR6Ϯdr 2 lxMⷳ?\}-nkJwr6J`ZVTYK>eQuJQvfό;9 cǰ^.:E6L6s.1#5<YDCPyY1x"/ )4~Z-B0pEi|EAke` h&>XH~\hV«D҅V6,!l-R7]ߧSkf|o+1[L@{DDx naat %fH5| ^G'C 6GX/~귁gEhbv_]| эzA .Z@eC󰸨JfYZ?zXKD\Շh>^-uZw;S:S|%IU)1YN9JV0C+DRa.JLPz&2q͐򂄨FzgF APWCpuXVURURURUSVJPӈ4nJG7r~XHUh <#JF+FǂSZjYੈ ֚01Xd +PQc>B>Axj IS~q{"K;[E",@#*#*t+I*(7h 0#qXC( ~XVާ\rn2"}gk0|m+_f7V$?G}m>\s} v>Q!R5 ޞ˵oS\f4se˿KwnxO P7`9e(XuO]89P$)ȩ3h&n9K6Ek~eGn?T7p^xX~ >Rv1l*aǨ<`:Xe@a!,S{8B{p62.w 5xǕ^bh噲LGUaGEX9o%8Y)" Z cTMrc؂ǁ`b! 
>7~+OQbΡ}aS%J%Hkv@-!h&k|4 B|Iv$M#ɜGoӆZ"9^K %љZrsK؃)vwXZlu  -h&aQm#s2#|yG>DYTDFG0 S-4; %%Ղ^B;h`Ԙ VQ=xXr$8P{!ĜC0x0?T)mq%@LhwDC闭+nWޮJpQVNάϗZ߲)oam"qFNJMkn8v+ mWV(!€*art?ϋ~5!J&+dLXTJ*KP:Mh%(k^M%bRgq>$LTHȣNpcOpLc6R3<yK/ <`QuIK!ۅW6"J6ƱuΧt(\YEt QC5c ˨!^$z\i|,#hHjAW^3GLEWSWZˆ$XJZxǡet :y}:NC zV/ ?%%L:DQl(:G"E'{N y0h+bzϝ2so;1A&!V<~ ahE+5%3v BlVS4Q|l5fuI7L{H?{WG C/2ENzn8GܡDII3wf9:F$Q/L 9ZpX1~AWrQD>0mSv6Q)GvSeQE)^xVbH̐Y(!rm5gR61@`}}_ԫeW:T XW3=ܚݾ+ /<.ҫGFe|v@6nj2FX t$D0A@xBOo;נ/ܹ7 S=~Ϛ)T|ItJ2')3myɟ@RpZwpCkw0\JOrdq 7AdKٞ9VTa= Њ*P!f~#}m&@qv@Q[<v> ULQ3tX+p9ROXcv21Ӆ’|H貛_QĖ t8Ŋk;t٤0s/Ϧް!4(n7\7ɂң͎w ˽pʚmuSpo\/ul=(:kHli5]-O(`¥~GأԄ]Wew݌bٻ[ ;fCOGD8V X oZgde꼹Gs޴WW #$ox8zxA'X=ev೗RˑxvScٛ y8b@7fF(~޺Ec(A]3<;L ӵ;B"{]6z&g9]H=QuKgK,xr* a˧'dL6h_N׎&V!㳋Us#6Ni%HL6>g~Ri-{}?B8@,gS7X19Ud2mjmTʅͮJmlA8D|}6Wj@öo|R"W8(Jm/ s=䖱*nBe`x1ifUqSOnh6ZNmY1"}Ǻ&a6d^h-ݛuݘVòCg6!l1#:9qUFIWJ  GlR ƅ5aIc\I~Ӿeˏ,u W\PKF_z &Irж`/k ^UPsA7(ê'ĒQ *ZÞ BXsWRՇMПmV &mo <l/pBVHhYнf d(hCU~k z(tJ& oK𥤝iŗOfĭ1dA b[U` ($\<\>.VW_H Emc"3yD~:[b.,6\?շT4R%ЪMO/.ȇ?g'nۋO]?|w╻>{gϟx]. MnMC/[Quu5mky:V!&I6J&l' dere=%dM^  ,A}Dx=ϩ}cXs}k.o2B@P:$ ]57~^:gť>\br6@xcB'ܱ52(u։C9fP$:K[e؃ -AeEEԒdm3-ggmHNkӢ3%ĹdY;^˒}Y/Kve;;K3Qe3'91dXXqED, $ʋ^}_ԫhߗǿ=4E7~"Ъ3ae9,}Xٖ\d&R!&)rCZtMf:kj*j^K..F?hgw6Rz0BlRI_6_7o@(,&鰡q%~V:.ipʐSM1`EI cM;P9YDp<-O`NJɄMje3j9`5%5]C#OmBVs ]Yy p<'wM&?P҉2I)0"hru$L*.pxaM5^g@Ƚᤣib RHZGtL^h AO>}^?q PJƠO6IJ=mkx@ ʜ0n+ BZV2oSѢ@! `aR( 51>S.K2E"?/?`Vj]Hbٽ,J/r$&'Y&)xi9+;%.a A%?G{z;v`xiЋDӠIw=ZZd~ʛ3/~?U|^ ؒU,`@j =ed2T!jS(nׄyq OΘ(21VH[ʞ H"gƣ}&&Fqd TZ+VFi9} Ceq3e!ݨLpȌJMF*wg$2sA2THur1VEmmxA9`ͮ |A,dMu'fn`JZaEUhaIxzG$q.gख$v)+&)rӵWYAD vVfTX'1Gx4!m)%;?_uYN|LW f(,8#z^Wǫ|QTGQgcmoG3\%םB@;10-u>3ܭy7$I>=QPAJduY8ӭ|zrn'c. 8RpyP${Ol(0,i"7MMrV9*B dSQ[f?8G:WO:`*0aiFK w ?Q;$]a+hVwf<7$[є&e-eƠ: 1#c}d<9tQ{^b}hI$@ik$kdO%+Ub}O?9 *H1$$4?=8);Uς2w$\x1 0~騧[qR;H_FхVi̮ w_,req/h\W{$Sџ+_onn?-]Ճgѣ1]rq}/V(ӯ.V~w'nU 11΅QE") !ѦDKR!O%]d]#m$`"a#}ʔNT %-ƐV6zfO/ JX%(5Ϲ/9Ed& MQRdQ=φL``'`xr(/Hb6kRZ. !#HG$o[I$LR>Kp ^%tʠxstQ}r:P[gڍ94c%x-WLgfFXqn.coH&ݎľ^ޱVੂrXis^ ^Go E8m]9X 3ȼΓa2z-fBFfݠh#?YFL,4Y?.eicȔB"wQbsLL=@kVfqeɜUH+s(A57h_ AO󻠡wD"{AN0&OpPv 4@NOޤ} Espw;|ϩy%ܐ^iy'Zс~@2Iw|:ez}#$j䬩|:ѣfVG/=PWe p}Ez)TbF a_}}hY;1g-'RۣY6p_6(Koh] .}o}SQ=3w6NAHj#X}l,Q%,f ~ĽV`Z&}{KF ]*'J~ (\h֨s2 ,AY l̝ru^Rޕ:WG@ɑ$p+NPX/ `v f J%|!Z[0A봓ͭG&e?wbKÑ69n.Ok\_"7(ˑqf,?h'w[Lֻ_fX);5cťדOEX[Br7ڝaQa94s9mB>< 29 xG }@k4 FLK&{i6k{rLC<$F{Tm[>=91Eiaan| tibڥۆ*x5iTRGx|0L ɆU(6=Fe^rR)I IkM(VkK`L*0^UwANVf} KT[zz\éhɥu+A2X*pa%OK6Ǔ2gt1 ъ^hUS-IcCFp1`TO+ 3 VNB;3ֲ=4šbiXPj67 <}ߣWoػ޶W98{"q4'A|x[{c"TTQrlSd")J$$;;rY'a2uLx֚cy`rTKD\i\#(y*J_'JY|w}??k'לG1).`2+u: euc^7: h:GkZs\̇d"%Esh-I);^xy@^?QKd~skFpܕ$!Hf4,He 7MLO,1Y6<5DW= NJR @d.lpV[WjkUh%S s,iG 4' 8|D FYIObl4 LЃk|{~cšHǑ &LwܙfLDר48Xh[܎Qհ)>jK-P$,Y&߭ %)Hm|f{ )uWVR%rߚ>/ԑ}5в:cl:r٘.LgݹZRƤ._ԥ{*KYVKxyh3Ԕ!):o"\"m69[^޿}W&)1IOa:H~8vc(YiO2z Y%@<L)_WhD2оŭ$#dM̩Iy@a! 
= n-,|ϙeNR<ȩc%8SKT MdL' Vf\K35|=T9 q/: Ք"i v8ד/5Zq0aIkF>y #Ft0]<8 3;r3LA2 m i amsfBsnsZ9-Ek909j%Xvv"+NpXS r7[Ep˕f!bp8_yޚ+Nmfֹh:8p;.[AMfH]c_VL.iɂ L/{뼜䔣e]bewX,qCyDZpi\yj(\ESxe;뭹<[3PD9qK5-@TdT9 3XWU-&iUÂ1J39?o'.ٺXIYs4>s0G ;K4Cur okp J4O095Q'mJxЈԔ"I0ZR)$7 S9vXj|5)`JSfŎcq;XZҊ N)b0g:(Ψz FSRcV{PO PdhALiSQPpg"8J)P`|\O``W J"}Iv0>!tYDYn%"`bd<|1$_5WdJiwl{=j%;X^W4~x=P/abe @ 0mU G &Ak֏bI:8E8čd9 ZPLb) ϶4U fjiOkF{)[GZvsZ,ih~n;V )By#iJch|7eၴ~Fk 46g;Ƽz`(vlX|8M($iKZ@l?D(b9G]PmC*8JKT%js8O5/yD/4RR6]{DNR_ƹ;>yK9ʬjQys,8h(,Y劒>fஞ3vj<+jM[+UPb!,1V ôHżҒ i9A4% ǵcOnk/y'3i0kCV1gnMpYRpGˮ&0rFs{1I{ hZ#՚aYaH4ꍷ?j]) 4^J[8jKҢ#iŬOÜ1RU ffhs<ڂVgah$FR%.wѺ'sk6h_n Of8+ +)F k&!rfQcBiD  2B%rq{We,"5Ux>o}bTZ?Ib_t~hU+LJ~M Cihj=qL!LSʔ0 Bj ?-wPA;c"ĸK,g9@O`o_uteMv2PRD#\Gd>R"ExeR0$ Zjf2Zd:ʼnCGG[aEr|4=l}T]{М0Wh+Qt\pb mTTXg=ˇCYEu1*l;@b~qf̕ #7BE8^]E cT]΢7vQ:MMOl >neM' AkL!|4u}d/:c@o3wML)\ҞR{VcCHbG'|Ľw?ob ! TfIAs%UQ p Sgi A'yH\sni`# 1c~{y?Pd:귗7G=Py^/D(k_$!S%fl՚:oʡ83sK ks_QwuBB :~I1fÛϫ=М! *)F']~) vqd#8) اSwV$%)SV juie%)26 F̢ '^?X$ƅTa2[37~uyY[ϻvQ MSL73sFV< HW1 A蓫Zd yG0PIY=8a5cx;ʼAS{3D̓mD4c$ ?] Iz)~g ;Sh5OY_*s4vL*qM)\a(;)/0hCN-|bSc`v[oF+5>/LsIӵ3]]DSdo&37I0,)zŅyه_`sjMut'm&VwcBw@ըLuf:Jf-Q[v>1Sy0RB;,jhe#Tad_adq:̲ܶ:eFb40RQIm#Zlrcs=(YRx΍ڃøI.dH#Yu<nxweSM1z;ϻp81Ś7*TR&>qOWI*FfO hgQunѴݥsռ6SZU q9^->>0S$NTP~ixkIZl TQ6J$ouVbT*H`THq! :UJXACM;. ,-ZzRɚQ.멭Z֣Er{e<!Ryc"EP#/ GXIeM JԺڣ0 mmֈ2u^h88Rdg\%t|,}tCdӫ_! J3}S 9x_0 Gsm|j=7p!&!om#U, m6;i0UR=тZrfsX}߽ӥW,2kǕ YR>YE_CMMX}XBCVWeLnU[,̶[}ޭ y&Mw[AL%ݲEWy?\}\\d9xYfd:E&="fG 1d l׍-ߏOd'_^%e0v ?ِB Oz~&n\čqqSܸX!`h~>2vV$֏T 6\~ׁg6H٫+Ñrv*|b7M,;DDg>$K{9\^j̾\]\ޙp`/M_F J_.ۮyek;Ȋ#'L2'/+E1K#k T b=K5i:X}n) Ulk䱖-)铊7WxPFBnJdj}ƃD)$An+hYXIYF*)Tu٨.UE ]0e ؎xu٢c/9ާ3Ob LN][X[pSg'&,j~꡷\ÇA!Iɒʋ,+GڵQZe>vV;B+XY%xKVUTZlYe!擡I%!Q֋ H'MrF1H'JHQAr'0ԆԱa”`}l4qwmm$+*=$媤:BUMB JS%%fwv 6˲N====S&p0Tq)#SVGf#HS525j1lьIL*I:ԵlREE ^J[kA:V dĻqMٍ؍5+ ePu"eѢ쿼}{(2UmʳP=ɤHJІ敄鵙s1} *oTdfDU0Jfgڅu~Ⱦ)7?>&f}_jX_,[} Fg?]t,ɯec,v?eEбAW٬8*=DK7UUnPp e&RquDNvE{)ٷZ=iR<M@*zaBZRe c"s+H4K**eP? 
k"-[d'^<40Vq-"ZhYV0u4”]օ ^hPd υe1m!߀LMvH&BC,s#,fRB+Ym"Ϟ{l"D-f7ezRk7BYb F/(``.`Z<}]r֝Wv7;fñc7w)=^ē1N12Nds%$e|_3 aD$ƭ dnZ7p/2k|# xզ3G[Jj@{[ˠP#hvtR!,fsZ/Ya]GfknO4=b@4XPc8QXp3ʍ䑢C!٬hvZ&WqmkjY::1f5=aASDk':Dn%qa{I֍Pd", yj!utH0=[յt'l[u} [w`sRQ0XQ6("e=_cUb.Ů10Brtِg?+Olu5\IڭfSTqT*66\~ -W`fC:^-,1C=ۀ5T'Zb,Y{J7b\MpdTIgmh+%E#tg4>Dz=C8*wtc8XChuemp Vշ9)x0QgǨ#=լ%Jek8k?;sBP6'anHWf9&B.2UցTEO`'Qebd [h QXcaB~"~f]8 'L9z]2^I4CXC_ن!Rc(RT΁?{Rǥ5D:Hx=F/b랹UpL;NTn WzYr~Gl+s"v쨭OLP|,nj~}!vQPa=e y|ݎ|`ı?El5eXunY= Vo>@@0; @EдPIVۑ'rUQ'c`!8U/.ǂTbGEIa͸@RO!(h$VHॶ@aؘn Rpz,FhHPeʅ6"0p74 bt~% &Ä"dɤvsmB<215 L4zP:| =F&WJVN f04dM ̥r/-xoyO ScLz3o&Y@3{zAUm5cr]b|Y!0$!`o?~/aVdoWaMp^=Y:[C|=pua6|\ş'};0Wd 5C.A-?=83GQ锣L#ql8)9l8)_hs_ `#:FZ+rEΞ$5cY7}C~Gm~ii2G 3BłٯvkZIzxk)HV{kcklYYX ĵ zLN5c)wT_㙊O#\Av\ bB1iW4 !hyre7Pr %:F%RSe<@'@qTg :u9k4F%m11Z"y Pr8ߴ%7M\I7%\ !j6faZ▅IR VEcUnYY(Հy܆Ad _fR|-|/L S7 5:]&oC6W=|2a0ifu W&$oZGBY6$GM0: aGulP~DHZGz#B:#$L 6\fӃ{88F&L <=ócpT(=`vudnOQ8+w@٥<=x~2QĐku@2G7OOQ8Jht~z~2Q5$Gg;'stv~0%V7l Sn{ڈd6/CNӕeLNuf4܇Ld.krCY5Tw4\ ͥ:A54h]e%g ~A Y%'8PiY pg=8 N#dv-]Ibc Շ=iN@L`]s8WT~ݢ[N+AUלoOͯ/I b2Y&5{㗈 D8dRkbr.fbUՃq$SEQܗ%fHj΍ZA)7#/k3ʨޜVw P3(;@,{|M~t #Yq3lfR06cJ}|LC+ sAh<9,1Guni}r暤ݲ4ly+~.TuҰ4lۦaߓ( vʦˊvd PS A *==}=M5Z@+jJҌ /ҩ9btoz @8U XA4 "d2Q)JuH%6z!0U"M-iRo'ڮ†A޿7@MHfIvswF mHwx-t50?YD" |j4^ШJnB/Jqe,nF̖ӧMӦQ b3m^HΒs‰z,¬}?CbEͦ˅Q G_RS잡֬2(-:*b%\vrSu@i))0$pM+l0j+D]a\z59\E~BG]P\ˮkjӶ@i)qU"#-RBtJ^Q )\QVU:Jׂh)Y{1`'6,+g^Uc"֯.~:\[oH;\oLPw险C /u=n\87ݻ8_O ]>^uI2 G on,7|u'VIKdИ{!7$R"iuiv-ߤvg98D8}o4Bƶ }Hc-D@K0lŋ/DdhP9#8@6Uj\\8𡐢;(^A>c MQ@7wY|n,C2r ,TcL״< MdQt<4kPJrj MfD~t捫% φ.sg>ys~2sAzB[<0&zXu"G (+"}c7Tޓd__?!Vapu =r5'81gD:Wb?T`u+ {bha;EF'FՆxbhIG j1I  ( d*&:8k3p&-g1T4hcE|Bcr˃Zc&BsD*cA %1Ǡ96hcB8ě6lA@V孡!ة11mX"DIr(PVե$Gci⥷n= qG^!Hأ6/ QY;^,Rz1tFh5!+v̇c7A QJe`3׆YngFǍ- 2u o<},hIeXyխ$sFt.@8G{I, .ٛdSN2\̠j( ELTZqc>OK3u;D>+I{,o)?E3K=7d%-@J`?r5-hǯ؀t̑ Ry<)~aytןCi*5s M۹0Ws%E-ufEgudQ?&H-˲قcd;0*~ mLA1*z&Y/V_r币O2*Wto@(WT tjv!Y,(Q TaD ^RN#h"dR$KqD|WʇŸ#_.qLQ*q_Z| Qicu7z0Ba+ا&p D+|%!7KܛSvSfM0`?U(}ʑ͢>OA'U=gk;+~ȁ- ;ΌPCDGw1VowN]i7CJ{g[.@HLE $ TB\WyNpE]ƫF=%D9 8A@U+ctu=4,"IhL9%R)G8FZ'KܗVa`I: '##ڱT4ؔjdP8G8Xu4‚Rt)'AIT35WיAȦ>DCH{`')Y H I/E\3FwG!Nz1:-.c9FoSNQ"ϨX08^>y71}r }Ѝa߭*Ju}t{ѣuG~A@{\iF[zyme (x_Ju8S矋JHLq.7sA9fhQpKOm"Z h'% EQ3KF'%3"İcIlԠZOFbKJPk8N* EFf<,y o3Ew}TZ/ŊO{dMyW辖ٳe_fŀ0r'0s6Ub@akgdR=BOvX-UH-y)~>qusC1 ImD!'kyb#n,*ލXR摧;vOvp9%S,{oA)Ej^(l-~jtFR(B__-$8rx_XFq 9 7;A ch{w_ɌҌt,0#p;)){ w J$p@0RhJll_w޻f!3Yd'c;Q5`LɀBDY+j8CLz^h~@Õ(gAqcrj}PȪ7,ORK^Ƽ%21qmK e0& 3m'(dTi$La bF$&$D!)XF|{-ltg\ClB5<5;/$ٌfI9)Mm֓L+~3[󭙮2;!R}ס3+!:҆$FyxX*u>2kJ,j0aIK0ar4bM觀yw~{@bo8xHI˜/191!+qi 1¨/OZBu|<ͩG}bz^@ џ}LaL]iEEZls$cy;~fn%ϫOuWHt8M>ɒrl*ץo_^^Y6XgL|n:W|όEIA;~3o2ެ2{|oŒƩTa* V)C!EGPPIJM܄z_ FMab=ma1`Lhy%? ka: =)px6p_ dD/BqgBbGt=ȴ<;Hr:$DzsѝӷO$A  riƋ.)ADaHI( 5PlO"BNHGh5A",p>>?Fz2%8faAKx~=c"$n0fc3#@JNsWsml*M 8Kޯ}bsiI5E<kʋTLt샹⤩9PE$H&"EDH eӔ ͊EFQ$ceNԪC_y9L Ԙ 2^wYSAYi?{ƭ O'uи_TM69vٱ_6YfS9!) /#b("K"f|ht7{ڬdYk?^ IWt%IWt%)']cW<~k7|g/Ʀ)ԥ5m%V%V%V%VRX!Ů(̀49̞ͭ̚Ψ$#s$ԁcED9BГF:uf(ccʜTƑ3`h CF&L9sr@XQ5XaWc]b,pcۘ3B)tLS 1&s)нɉafRbdtHEf@x8"yʹ6`artQrCsӱԍYX+J$.GRs"KB `2`M3̙=-#tkZE ,a<]R  tST 6V`AExnv!h)@ul&3<%H!ha%4wԄxN5 <2YbNmkHbdFRk\]| 8#kH5D ?2bg I2,P;cb;WsEe_0ƣ&]gM끽_DznЪOo^b8H >M^~>,5XxpGLntw{m@NIgIG//y. 
O>I5@*?35n6gf؀R ,^SG2 DK{Iq.NXI@> u =H'u' $QDP^/0=u|ei+rғŅ[s"jI_yAȚN@^ St4c~-d ^*Ik_Vfy~!*> 9mυq) s~[^c0ۗTM59Z!`/,^ւ T܃::.6zx5n4q6G0FE"6GPc~Luc)z:$i)"=La"1MuψQr*s2m;*{#A]U\{:^ hBx0.LE;0?--^Fr SG?(֭]I5 fs[ OX {ηk$R6 GV8?m@.V3U`eGkׂ#K}JF06uRqNr̙t pT; LUAE("u ſCTY.ލ_^a1fxy)UXfB?}n_è̌5-?C[$;;{,*DHYJ/'v2Z\3*I3c:Ϧ9v^ԤpOo5i_`P̫{6 "aℓ{$D %05G"Uu Ȕ'%og#w(.|0 RaZX~ SgTXtݧg7 v(!Ow}$5mȸ6 56=Ȩ@ux5x/y{Stq}Lf[w;_|dO= 0`Ei3 Ҭ*;D=${6Yivv)i( CB񆂥: Z;gL$ө?\7LqCl;&N:_An#04>ϭkSnLJFn|&5$aځ[i)\zC^Sw]= \S] [OWyp0_?e\ewFa8 πq~w~9p h|tP-]ttJt=hUq-I7]f\Pq'wğ:i*/1hD'3{Kp鬲\^e z{V!yAI=k~j 2T.Ĕ`!A IE[CQ^3P-'N|HT^gl5i]~n2D+&n ymXXhe{/brWX2wB7ާn/>A@z:F WX2Bik폓"PdG˕OQK-K8;$8!d$N|YͩVZ2"yOdo>2p9LT ` }OBs\$0JZ_OJD"1$QъjA:[5:}2#qXDDdy)wu+p/e}r;|6-m8A \E}=A}T*oψ (v^O)Oz|ȬTyK8+AHsxSuA/pi^z \GnLW$~j86(^tu* Fb|LS/{ڛZ^/o)P48\mEZ^vmuMY&&e M!tQ⩀X*y>Qb;'uo?%v7-[AIAt߈Ӯ;Wv$3ҧ#dO$<Ҵ#c蚄0JDFVaw$*w&H }vm.pyҴ+17+IvаtM<g-.ŖBG Cc.ڱ_~;vxJzq]đ"\AR>Lmeѯ vcT>x‰ ؘFhz5uG-n]8Tlcr7u & STJ'Eg\O( ԜmH$Wlƨ˱c)W6! @mL!/6aasu}sوϛ/cr*i$Y!'LSY@ sa.'C@BTID윜hN΋H^Rb0v~Lgtr?;R_wkJ\8wܼNoSE."-C(yyW>ڏ !"R[r !)qRn6*0HE/!#5t<*r #/\@UUT&H"JKʁ̅\f#$unT4uƖ>,fFMj237dZpCS+YK%OH'ޏJtFBݱOuK刔4QǞ~zqC#F۹ͳIWw?4`Vz;/xz 2Ie٠Q{w lHʚC%'P93U2u!lGtv܅$ŠA:dN.B)QLy8LV#wx| u!PI)Wu6+ =2|5T)_): k-P %aY*C뼇m rǚ 97=V4_&;>Lpa Gdo@%: t@J@ClP*[@}D1w0X @B;^ 7=T{4:\ogU /$p184i @zm(Orb(69Eiv㬻DaՄˎo14cYc|ckK^tf4ї 26u7yB k0eyƛ͕y7l(z14^Ե,9 n93|pT#lJ ] =l({whׄZ͞7S ƏFySK8 菆ɬ1t N2ѢjZn5gưս QpT=rtS:X! ڨyյd?;a;f8p~RMyR(C ݴo7Ho/;v;_ߓ^oH}&=zyN0oSJ3&>߽x=yN؏1@1W&j?S̄ Nd -ܑ2k _.ZevOݔD㯹J^/SLݪ ϔ> ݳ8wSue*˿؂E>GzWqL[1,ԣ/ uQO_K7wNv۽MmJ%{p7.zwsU[_Gn=#㊭ ]N+!ۅ4+_77׿v{A.l?oڽїہ)JZ8]D"P$~ިeC맘 PqZYBto\S2GnLR?=9JK@(_ a}?*nƊ#|&oRJ?V?zV%%6o kk4HwuOm|_ܐR.1{^l2gs+zׇ\_;1,a̵lDw% *=dvôϵyXR+#|f$2lIY~,*0ew44Z8uI#oEcԪ}3&ff~L.~n?.>[s֯DL|}1F xG!S9Vܡ^]N 94{DOX]rψn]Ab7}f]vuF_u{Ej^bmcCcӷ^?`(TnGC?L?݄q|oF#+\rkD.aGPP EyT-"hzvsEhgɋב4mk< +ӼJgm{ OGIɋȋL^А!rh!%bl uP{~KEBL^3y0Oҷ3af` *?8 gc-D!UdLH;J GABh{B1xLvãuqp`IQz2g3xD1 ra; R-Vx+4zke?.&Ps7ׯoWLh(&-# 8pn#>(@ UO$qBh7]̀d|HmW5`/ÆEǞ&D'%;mnL EɁJ3MGiIL)A=1z,٤DӚI-a8!A pYJf@@!5܉:^5SKUqBPu#Fձ98^eP(c'(IʋG,#̈|5eF~/ܲ7bd;D2'HrdN46MEDrrt?-s \5~f%(ZlEoqTt1"o*!u8-s̷`Xȵ1":%Zo(dژBKtqN7yŠŷh<&@J^ؒ* w쇢cC)'l|ܔq`k9om#BŦ#Txҙc6$X&RRbN؊QOFp "[ #X[:9aẋ9;h%2Je]a-Ԍz)` 2=i>%@ے +yQkyKt(x GШpӴANqY'@$>o  rn7x;._±Ǐ,%^+qX;S M4y7{U]̦Ggoggt構jK8E>NLjJj\fٳǧ:9yՋ^p k0u|ioZN?O/oEZL .?H] r2lx>_{6K4yə^^/|,Y8֤y]xa<,3 OFgE:˕G(w(חE{]{˻ǿO//tb6|Vu/!,Ll aB"Vv.g6 :au,]FZX闟_oOYiF-G+vqo"Ӽc9y;6}:|9փfib R؁jӘ KŢN|De{ߖtt ҋYL#Ȫ!7kfSg^hӗ;1{ ~zu(ۓV9s!MS|bS#;9SB8%"r9,e Ǖ5,l9Du)Ss q1XI"5u Jg0`1dp*u}РWP=;ӲҶ&.ϼ~fo&5Z{a[X?=~GIfHY'U^H,Qm/*qg|a]N67+yS<v-R/XBw>|`p+OTjR s,A_ҫr1wfB#h<%cPKe1 -%q; 8)~5 ^]\XveMZWlvT|E3hTaNAZ`! Mڥ)4|bb`9)uJD/УL c!`LH W{Q1=6u3tHiɅbleKbTLBr,uHAii 1Lσ}{]TTTTM |٩|7\ p6 ]URWire31 sj+LvԷKx7U<Px&ъ6' [*0#I lP g:r y>#s!A̒fb'tc#׉h+l3:@G  Vȭ̎+>دk%sEFeaA'%b'vA3ŭ:rm楷AGb Lv'oaZRy8BfC1H"̵ w\G;m ';[{6*gJ2TɊMZ *:xL R9#K,%eD>-'EUVI]>J >KÆO Cy(MBrG 6iD"fĉ.ijyo,)e0Z3H"b4\bѹlKz9I̺ gaf脬NȜ3hJd Gs1Y,s#E̥|ƙ6w0x]7"RVlhh?~!|wʿ!_7(7-!n`2}Ab8iR58m:o9RIk@{ꔂCbgS+)oZdayg>#_"hbO%@ Xꘅ "H8'jF0`( MH.nnDO v z{?:^m?Nt::o15 $RVrA0/46 !^֚A0CS]zgc H!jDt"17&*aƻ9`P HZ"8<GBgBރR&=dnq2C!gFkB Ь$<)^ŨzA R>Y$0^`;!'4w% }X1L#:\'-L$׀ь Ζ(l6$}>!Gc]ni'lpc\(ͶlCW}QCnmyRYB.gy|e4yHRT5 eJF$+)z!;=|m2?[KWfkho|o F;R L X #b]*s-ùVZUy. 
gLfc?sCL<+7T*Q Ayk}_R#JlKo;DeWòRfcW#0ArCYbQC[oCPv@rE%MPd9iWy11RfMǎȁ牻N>In2%$forYJFB .@-ZɁ#gSmwsJu9Bp~֘wuf[^y M]g:Wb&;Qt]2vd-e$ZoyDZ-"F?Pɷ*3}Yc巗ږTv*;"dw,`>0)/i HrRrѧj"Z-^][uv}CӥkkwGԬ'^WS8 t뚳߹;Ƅe_(6Or&X@ввlR bVد u1v !؎຅w/ZVD/Id{D ASztI]1 )%R,n#JnF/[׭EgeΤYTڍg0:H_9mj3.jSG㠮vſuY!_|ԳVϘ;Ĵqr>qVޝ\l^} \w|U{m7>ĵ<6u4 oW#aߐo8bZB{cC-m玱H E-vD[6һlUrӧ߁*'4]<$EWm K[?XݮwѺ]Y_ewd@ڪI*N)&iMZ`f(KBCL?7tߎr.URBhDO0j 2 2AC -'Z7B$Fieu D] N8WqNƀ!ia+8Tkd־Oʶ\?OKϪS̞]\;}rTBIl n|MI@ӛiT,=aߛQ# % JMS$LRCi{鉫3IfWĩv=CRI3HUb%bNЩ^Q]D4-@ba_w0)W|CBҜvz[>8h7rK3s&xPNbDK1tcuK3M7ɓr˖?`th- NJՙ6tP}4 Lh5(t h8ՂKvg l44lFF}iwudܯ~ݮ7҂t Eص.,\ua(?$}Beўwewߣv吝f.bVvqy*D{(񧓜.n?ۓ;;[Nx #l:`5pȠ<9vfIƐR ̭}}qK2#($, ccCP/xLsUKq̜pPwlNw^W7^ž]'GB#ͣzo )j{Oq+&*$|(8,7mGBn<[Kj ;z5?XH0"jUpeG 1np PLhw:(ZW3l?@M9%dw܌S+˿ΫpYBS"t-{T>fn[tئg~SA m`pP0:(@q9;$aꄥd6kUBEU bNPJ\NhN%X4zt`@Ob6,ݍXAxUIE2y.@.TNvrOWzo?xKZTmp);evXͥ϶?n-_DQ]\Z~LadbH>.Rtr7.=:|3L̂xt,hU_6Zd>]gB#}qvц*<։UEx @eP:I9&u4owU(iW\RY[oo2,@P_0vk,[yҕ^-_u ΘuI+ 䱔:24$7H~rjrM$h̲LWdzv˅!FZوwߟL`vX=nG;ksN]HZj rq Id5Y0IeJ3ƔX30Z&Z;'Ye5$RD344G. xUFt2?n ʜ\),r/Pz2vN#m O{B+ٓ ubz&gݛ bqRYT?<s&3υg*3c,,$Į"cg IFL0LynrkH+` -PTY#8dK5pFju% (D:P I6hkiHD $,Ye9 %vGp4vg9aRBKZet\W?~8;;='4czOxH#~{kd.^n#n`žz`ŖB.|0[Q֔ul6uYC l:}7'di6x7E/4%a.[ڿ>=Q(_n7_t_;+ulSv63a<G-Nejs([ W,8}j)x0 Bm>迓hU> h!-~1vS2ג+{ z]0g/9V` [ h͌  WPҏ8ۺ(Ǘs6> B:h o_G?|I8SZnK"TP:W\V>YV 2ExҮwەA0űDasFyʲHpBB"MH&N8mdH|^R0Mx+DɃ]gwZzpB޵MO=y_ Gy]5);u'^@;I )\Τ\楔;Xxz5(PGSEGPiPuŋ@pmI)PZ_.Q_}GngscXD:ͿjDA X{.XNKߙev9'jhRL `絋pՃkyguRՙ?TEs<>NըĵmDر`5`` 4ZKŅ8tJy'*xؕUяKŤ+/u@Z!m"S!AE;SW⮈S#IoJ \@\PZ|*O&ek^S z't;j͎f,3oR#W]2B抹v, C;v&A:.bpB+k%OLv͏֧#J4U= 1l{*kg45YDjy;pO4 c7كDt"[2t%9HmǤ4)ǯ<k.O'| n㘩/C؝=tIi'>sR }`IlQg(@QIl<%IEhJ|B?@yVs/8P_:*IsaL1QAym>ӹ[HtX"?=wcqd+YS}h E@RFDʺs5.bzg)Bb]˙.J4׉x^3ٗs⺫4Vn::vI\d }Qy<%( QZnNe&$veK0}bTr20Rw$&H 늦01ԄA)dF@(CTFhPX\eq~f+*7qxXv1xXf<)7㬷VO[+lI/2x[A<\4oT@6*9"թYm}ֻEӫ+p׹-CnP$llEvy! îyOPduwx-gWŦׅK)8w7u˦x:wy˵!=}(B(=dާ"-HN*N\̼㷆ζ"4#:#&DB"B$2RJ bN]ڕWDpC+: PJ},Pzb) sys(xȝLK2ACL U ]`NB龖%W9n1[w|W:n;x}Ւ1Z` v]eW(Iϗo'5rՙh!AC[Nz !e(;זR0w㕄$NO Tzp`V\rIOHeq[a)LFNcz1w7fftYެ=<}B-"#&5|D?~dV|i.OiqNx5B2D<ڸ/i$ TZl Fez^.R m8З _0..l'p*1(I߯(H3!E$8y꩞$wdy哟Gg+ƫew3_~Hd:s"'G)T]ι4gͿsy ӏ1]_ [}]rpq`3cP Wc1@0t3V;VÇ%ͼb:gr _Fb(|"G|)׻h:>_.|W~T_dr:00hB4UvwD;˔'e4lxc =JFr=vlI )[MCיwY)9KBG)yX>3)Ao8|蘓:̉<Ӕ8c\Qe(~z7D1nj[VBk@y`BFHN"yJs|S.&2ٚ'hN̏tR3v8q)}RY(Y_-R%Al+()4BI$+,lʔ iRYmNE$sIGKeb؂C89|olNDabk%sF;t Ƥf]tº@8%e b)K[ͳr-9K C1kct7m#f Q)Wlޛ]ŗp0(9vv4nz6aZAV+6<2MFsլGpa.;0\XԺ"Y@mȕuEP+V6֦$R-7V 4aPIrSbWwkXe{yN>ioxOq("WtBeSv",'o1Mg'G5"Y=!zf`r0#UDCĐYZa9>6m>j6˿o.#s ˽?_tinw 4|W$uq7 {MݲQ){Q-1:Ɋ*mIEZn/G5Y~ͫ<\*ȕ2U. AՠɣgBCt}>%dL`#<}4<mҏP~<$PrE6O?WTsWDzf=F vH"no]^)U.ʰbo6^Ls`+%EOƇk{}c*RFo[hek&߸l+EH؏ZaJ E+"ޟ:a4S|ܾxזP/uj_i2 @kc+]`,ߓf!\-j L v{f4WoD(Ϫ 75{MRK|v @U6o/b+]vFsdn+)sBA2nNGk5߆8yb;6vSYA"ɽ -G! 
c uIFL[VU u܂/Jc̖gp~4B']@mr]0 ƃMȳ-SʮWGiFtEFP26:Y0Rǔf*33JO;G*\'x4cb-*ܶ_kp/}h8EV)%D-LE%s@ )H" Lu  E:UqAn9ѺUYzEYYLiÜ9JrQݰS50cVn'1=(ՃVO[8tBF79TU Gqa+H|(v\=ƤFR~ XKxg)$dbDrʃѢXkA9dᅌ{Xd@l7TO{(Аח((Ds1KK/OFh'n:Y},ADui(d&aVoq eWjAPLfzƜLZbŒ~R'fP7e pJ1JdrS&kmn?0SZ%oFl κ}76B0lz=joRJJ2":>P2'JQObZ0uZ 5+( nfֈ"h۹@wdk9]c֦ݙ!ӌ6&>(PYL[ 7Rr *:,+"ܰ]0e/UgQcE.UDBJbhܭTU^d ZqIBB PtsS4,8摪- z|7vFbQrzq5wL6͗-Vv6hV[¹)E4ی:.&mJm)_kw:}yOg?lWͫF {TCIw9k;uc[B+}!Ek)M|acΈr0!z8׽OZYۄe_ǞoyGm3ή.hiA(^ 'eQ'ZgMo4 9 %NlMGK@a·0?ҋig须R㶂Az`G2öz6W):qIʹ`0`^}NDvZ٪VȰt}N$\FUYgStirtL&ewαژjI1˂ V BXIs1,sUdAr|q3B0u}|ꢾEo H+5N'RTL/NM󲇍tSsTAv{E%pߚ_0Q ~WxU1nvu-< .u {e E z[SmuQQ[m)IKx:AFf_ڼ6[XmaZIkE wz1қ~/On-o)Z%dDæ'\ 9VЖ\WZ 5~?Ѓzj;hgW66吕xȝfj]{SBiD A-=y g՟/AN޸قsʖj l(ա+{x RJ5M~UC2p+3Je>h!r6 clX Z}Brvh\齷O@\mD 'I:3#b K v!o2[{Ad%ț]r[}3F(olƽ*5la2j4a,g>k#\7^bְcB[x5n>eC[r0NP]pFxTgYufglzu6g;כ!MOT7NnW/})z&77*̕[rAſ~WXpt4?n~ίSjmpշlVzV.Sf15j瓅d6g$"Vx]ݍ q<yٻ6n$joo$}^\>$` E*$e'!E 13taA׍Fw8fOd@qh*Қ@meg[g&"Z^7Zy[E:ao8*JzlawSߺFipH$8ilQpV͂i6&i9MAThy[䑁;G[w6(TP逕S ۳q:BF;I `LDMu M%>1: yd:zlHo2\Ɛcc{|qWvPdV;׍Ve@F~nb$&Ghy =УC/G<_LB|ki! UfbcNd^m]6B;'o,G'͞nh?+x$L~wy/5PU+:m3dIt}wԏ d*ZlDqlda@p=vXUH9lQp<ׯ7ꌛ60vC(l U.9L>^h|pXi; ^(!aQ Xpk~>DT Ql2Y1E/zX,e=D#Ne? O(r4M]hQ5ҷ;u*PV쟴ϭRXky/^cAP)B;i?tI}CKo% Z=^ ]kD:xMu2~ kOo%-ͼ,6sPt\ \2Wncrz -%knXt7c-4 ,BűFf/ Z$J֯WipyF>6nΟIZ. '~o Di(ly5}Mֳg4I`lo{F򡎩XZ ]uRWnmM~|~RaYx9#RtwT⹢j3վN.f6ai rX| O|D~69iFu56P}L}Æ[3KsS@ ]r#]G#iJQѧ 2T};J@)cLS3 ܐzYW=ԕ(MЈ@r)|VP2F ܐLBy҈RU$潻ryIfU@ߖ&9P%R&wuߓP=1FB}Fpjj*s@\wXA(wfՋ:yE]2֙b_xj4UջO):DmSGX !U0#>RX@3J}P;b!+ApZk*6!1=!"g^J SK8q$&{QLTpى&EWnɒ:ٲZ(g01lEQm gК\DZV3SU` J G޸< yO`1(&9HIBn-'AB@g@o08cc]o@sgEUp59f.2M yT̙$kUI*^YQeۛΉ;օK%AlV"(aAp1ɑ5W}(UN#H[a 3*F,\I2 $/@p)̒5d+ c3 -0[@ Qp>E sƨcA*nja]wPc04+WaU2K? ׈0'Y smGD23X($cK #08 '5u-܍]|~-{N-k%M=0FT1"(0(}RؠI2'vV{0/A[ɜh*}~8=Xhکհ;[H.Rqc0uE`^ʍm9̆R!aakg8[ n" <*$ "NH˙SRM&{ 3Av9_P)'.]߾IQՠ^oLwYap`hR.p%{, _AVǿ~7V ϯ#- |[Çe<>=Jdd[BzFx2[<KU;S,5 ,{|gpu΅R7f2?O Z0 +c#P]WCxWئaÿbu n`vlw.RaR` NYg#&"8q5Nԥ'L0ԈXs'fI:E*59?= u`_:,'37aQՇnc_=y"c!wL_+zm T,oFj -a דAߺŏYy` ~~ޅ]ޅ]5;ZLxFWFi`^ ^ +`\qn{WSKlf ޿|>+q{?mze DFpoA]A$-"Jl. sKa[[8.VqrE) E8Ǵ 5Lz*s%px-uM8-.II9!%hZMp`0|%=d]MJ}!H// iSA~WyesӽàY嗺Ē⌤*y)&@ŕdFÊ^=Q8P$ӼX)rf:sP,> F YdFaf|f#s*hnPꀽj \\νFZ$ZJr$X!BOr-FH"}AXG0RKYy K4 [\ Y8no>  r@q9OqVCG:{ylL^ǫG, R9FTpR%R tԞKs덗[9&gėz<^  Bwda%jzlx֌"T7E^+EPUIunȧ!1 9E-o7pt:$ukOǝuG=}f2v5_JlϫޯRͪI} A# rEؗn<~s5~4,\MFo©Vp ? Gאpuw3+_1h<햊A褎qv;] x+l-yڭ Ex\Hܹ MQD!vU>c;'+J(lD:Ƽyt+^>o]/OGEiq\{&I)bZ^X+Ϟfq ^jTڽ]m O Oi)a^ "%,NR"lظ9R+Lq26 pF*MFCF!]HcHS}%R+<#2rŅlCGeZ#IPyR8e݃Z) äVy0jQʸ"{ RJ b8ueNKs޺Gނ5( ⼖9eFzdn=/uSPU U 󥷚b|`9f2؏jTd;//Ǩl+GԝiLT\*J^^L[Mg$b%!K+`Z`4Tsv\:>|K qkξ5 fZs,ʍi0SH(I(3BKE#m nV RjYCeBX)$FfZƑ菋GpY H f? )~Τg1mw[,7%J|I0H2UUX,C*a8ӰLӈRB2C%1+Uav@X<4A}?e{5"z~ FP9;:\# *%Ψ: 5A8ij,&lni1CKHQǻ^-;jhewq wWKxdno 3Ό' 7=$_ᤡ<1x_z$r;F 9؇(zr/Cg<?S{}bsjb%=\w}뫗wz܂ȸb9rkھRBob!a2AW1#5jx\.h/82AI!KݝZ=*Oϵ COr>欮+vzM= $d η rM8f!t>$?_S׼LADv4kӞj f)I0pFO_:zg/ﭟ9Ӷ/[~:op}C :F*7P@\P`-]i X7JG>)Kc=E%Ҁj&|+rܞ,EY]B0Ydw>E_*4 =<~5wzw"}&XŠOvjt7-Wv(~1V^7H,._^bv}we$+JA:`E\gLV HVNWEZ(}jO%UJԴ"wx q$Ƨ[Dq7+f2$bMċw:s}~>{9p/Xo/~_q]~2! %uw)z/a1ďby:Ӓ{-rPeWkװ藏jf99~d?6r'Zz֢5-+ץn:\DdJq97n-}B햋A侣vUIj쉦j.$;,dr[ [.)&m@V$QXn5[hL6PvdA侣v;]x^iInMn]Hw.Y2Ef:01!`B XP8ICl ;8)b\@SM)epvK zyg)i~_y\GMQ| 8'Lb 'plIAرCep:Kʰ,\$DQI+7Z5J4/ 5^9Rқj$E\o}c!I$T- Ss~}5=8 w*_Z #iɿѫ[j[Pû5? 
#X&߻}Iwo[Gte_woHbn_Bs/z8\Sư]C0FKA; t Cb.mǡadEJ]["nbãS6HK~)P"Y_;5P1B iu$-wg%})U}5.`m#zօ1g&y!&zn^sa+ۡDc\]ew\Oj6o)I`Rwq|KXi^hd$pu1r(-VR/ƛ ~?{\/b"˫f~u3WW_ř&5Gڼ2c)J|W`<:(Ihkg͚_͹oto¿c>υ hDN>Te U%|v*VptUs)|w^V$/o봡ǹ{q//y/Par ?}R|6t@t`NKp}M R?W~z+ |٧yȑA!٠܀Ednʢ%6hF#>=iZ<+%1ZګQD+⸫aS0{mF xF9lF#hܦVT8 =B`!cY@Z,e(4 %W"h)h)|%񧎮Y aPFWoq㙰x||7xz3l5a ĢAN !G (CL!X$/i8m߿^ipcw; 9} w4!}7kzkϭ~:fo%=\unc^zǹF̎T)mohi`r\# jyyyc|A[Jq-({ڀr.Hj'ѤsIt=F;48 Q=} Gtz3 bB b_6=z҄rtWApˑs6Id;&Z)VVP)מ`ʐpnvvo0r^*gWsE bI [E:9^ 8Q$`Nos8'"CJ5\vR* 2LzT#(XPǜ8` 2 buׅ(ܞNɨ-y3:Y"lJDsƱ'/~# ,,'f_>ba~](,e@Qg'=B t J _do;% ?Ӂyao.uy7/ş/p8/.kܚW0ga֎qPc K AXYB*FBE7Bc[[}b9.#1`LJR3K d2l5WC!!8h*SS(j?chxl+)IM L$l 5!\7J H CJ0xBS-JvyLr 3osk(cZv=68IY~{PȩfphyǢ>8?ֲL(Ktw'tw'trќu&7-=G𼤈*=6p+)sl0I:/ 7LoE>ֲ+uc<¿0IY*Wuk}{U{8Ν@Ƶ+no70eZ`3D70\X!'C8oRYN ! 2 {q8΄N3tߛ@t0T+"Q$ G3cÌbN@Nb9 ;W){d+/^5Y&A[ig8FL QDzX{$=-)yZ@ǥԨZ&q`Y1. -06RqH 0ه Y]ws)x׼c̐cJW1Q0i)dt`BjzӣVOn|&F-[oעp쫎eRz!Y973CsHjQ٩Yhv7[f&R񦸿Ϸv[NC?}ݭ/7*dSt~'}r!JAZ2b9T"JO"r!IF;PSVt9֎Cϡ] ;Z{.3 ^#G&rbNgdϟi8-;V3Tx 駒i1/8Ξ +,JL'[r9Ь"Zׁa5.G6f*r$<@,uG6svcwm1 8P.lUJ go}%֖BXlsq-pnDww^^m>D7+`MyBj /ҕۻlUYJ}/<^0kOБZYDCV1"z 9CoHZ3I(5\bJPN(HSa|pމ r"TZ.jj3^wLI}VQXT9? .RT[ϙ l:9/.q*]̠_>ct$B I?^P~7|D9/3r9k4Qw8.[TY#ӭ mS?c@ 6DmzxMTT3,xT`YrݱH Fަy7wS v!y=Dkv4Kλj;ůu!o+h-h{ge{X;jFȓ"-sX8F$!84YjJdUgU{aśŬ~[!+v]G#[FoA^-K2CšUpqF7A>Xc .b`_sMː}(go~Z Ru#^>üAҭgO۫Ƴc UMJNW֤VQNK5Ly{y=?̫i'FjD>9]5>"ٜPHF+̽<̫6VJR0yjK$;5㽌pۃ<vQ^ɧ4MRp,\MI|pvLk|%2Dw x`G;{Oԯiz;%G nk6Lo] IByR9 =P m4]iigk6M 7COnQwvo߉b]؀F͘Zڷ'0=VQN; &7~< d%B(1z8n3LV7 HFpǫ+r1lO7$&臘e##B(O$`)ɁiLQ{Yge7jbEز.+P{nMc8SY Qr㏚zIo9byCYeqY3 >Sܻ1nsM۔k7;V=ǹnR&3ѡ ͙fDa3?D5Wu:2C.{ >`#Uc7D LG\BKit#\^568QRE3jhaАh]{c*F3$EA}10JMzfAP9aVrk'S-Ƿ[ۚ35eﵨ>#uN:2*[J>3}D_ TfJq@ W^R/'* wó5Y^s\(ܵܤdZ{cXJ)-gNr\=,Z9慈W@/hPAzY[bɥ&+#e1"pCd(I9^us?pSA{{`y6J1bUߨMnԆP{)BaP 5j_.Ph)ny~~y04e44~DŻLAȍOݜ=_rTɇӛ?'lEzx)F&)~,?I$ŏI+WDSj2KX`>X):\d`ܝM?Nߟ2q*gF礤NFWS ŏe2Dc:Dq{KpYII)U_2SUr4fΈYZF'!B@GDޔ n9"&ʈVTg@Ћu!yҁ4Ca(]@I둰`aH)#eIYuHEػXCٱŎBpޘ/^wb0_C3$@'`[3-l Er3(f1&}䋪4J ~kiߞt%س;hB;i.*ܶ{mHFvFj+4. 
mq.SGeʍ4@^gAy d2O;h2ִB*:{e;шT}hUoO:t?_e'{C n_\_tIFE$hѲ7 +#+PN*usb%_[qXn{iW+叱-tm5z%nEwo\&nԜl wӘysMOZ }ZzDc, 9sİ $dZEV@V"uZs`q}2wE2'O)6*_fETY\5넮š:XʚZk .kSYPtВ=uI:@!W,vեBz M>d>_ԗZZ3 of3W-A1CSˋ?꫰Zيg93L7j=b_B}7*Ӄz@rQ顮*m7,, My" Y|BHgcZ@(ғX=aī<: \:}0A&lo}nH,q+!y|ӗ?rꩭ;o[Jt~}:T97!&>oy2i@}s \Qwĕ!yk-!dK1%9ҙ"p̉DZmW3|HΡ.i2q*6} tʬrb4C48͕j;{> P!xKP'Gi9P qۅw3ds¨90?"e_qRxh Y&͇ ?-P>9kJAHQx^%TڐY<2J(U\6t& (x TM(+)}ǝrx4$I|0ِȅyT |9ϣ0A@[MC冥XJh8 L?x$q "|9f,5eyj[PDz.~EV<,ZrRg^p{![,`%;MHF u8m.WÍVᇲsx 8 ģ d]T!'ׇx#Js0)Fi+:N F>(nCUj(S-?oǃ?_?oh|}{bB[^MՆ9󝆛wt>>" - ǣ¯y>q!cFK̞Zr@(U6g2"DH B8`G.+gE>NQLYcwmyg;@L{,7ʳzZmWzQ*qF9pB?jp?N ؽ]٤ > Jf/[ͤ2l$>Pb0:*Uy06!X۱QXI.L:r"sGM`$.{XvR F?gJAh!p=MUJQ4EB ,p, 6 VAQ^NhuFeAٸ0yJYBBr Qp|-WhSv^ͬf1;j <D;$/ ΀ËQ]ks~QVß'" jE"߷S<Bi?I[{s=1H"Y[VAXG'l Qg.H>btӺcމ|pY~|qnYcLc dzǏi(XD* " ªDŽ=b셆R〄Bn8&l2O^9ch8pW;q2M,O/N?u;a;39 b`->w[ȧ t8gk*0*a2 WQʿٶ8ulpxd܇΋+`,461 GಓMY@L펲ǿ8{ߦzfQ  9P0ѧ .zOÏ]37 *ղ cЪ񕽸@h&~Dņ,۰s0a\r" =թkdw+pSO//ǗISxw}#ٻOĜƧLIt٫ʘ;8'pk~;?h+s9+CWzv,}\O~{^#TZ !?O㱽QJ7q5~|h~=ݽ'3ER5}:4fSm7=N'i95FJxR~rֿ}kÑZS)4g7#, $ :2%ˬr'vvџ~U{Tލ&, qBCvN_*=ƽN\SxfOrs1<ݱ@ zdJŠZ^zGܚ~7@{ećt"av؏m!j7wT*3Z?ObZ2ku7o:ϟ篿@ގkU.mp&LoG$t2hvf!]s8x7N?L.Kn͔~m A>p:?4)$Otd}xj8_q~ 9S>Ogr}A=û13}m-uw{98Q^TrDz:nĬ]ڑcYx kf>y, H j7>򧮳Ta bie&_ CS\708I L\v`q渓|G#636g(zpGh L- \)莻U+.BEFqUI@p)"/> x*rk4Za5_)rnuX:p֢6kF[/ =Ђƪ ڝK PrIf"mCAIbh%>l3G#[)(^w2|]<fc% OsZMS$O>XzmpME q>+{5c ق) t}0Xw'e<Ȯoq'{¯q5vp8an-|#-V.0N*A=  k46l-íWuRϲ_>=nw6YGD gn~N2XxUadn[7.},.PT"LV)fqeJ A¢Kj < j^ƁqHkTkP[(Gc y,V%m37P [ -\cժ 1C n^Kg<6 RD}8bJ|lQnR]myOz4ue\ۡ,4m*%6 FZ!p"TǜPDD!s0S  TkЗy~/gN`rs'H㢾|F|y57rqQ[y{λL*T-Ө](RkNw6=M8WX\Yaacaa[5E6o]o9'iT|FݲMsKRu,.o 5P'=Jѵby4\# a.KAw_>~]-0E"!gcKO4w$V!B2Ti %:D" !HahejTX^jMh3[/p&hdƭgQζZwґ8/Ir10 s,\xHvY<'4w?d/27Z-v>}>J @"xͿw _ _C'q<>IeYMۙoGxvNw@Ƿ /Ǖtx>\M-njO-+.9Là$@kzeWm%{t 77]"?7MC )Qud| !A(B„GA>>8]Ce7^(&HP}_ NFI *q`BۍONΟ&;xlkdst Vsn唗ûS'Z&A(<9X(3^XJNp"wK5\(mDZZ+$7p7{d4;'j[p眨V"`]AlBxoxG!DVՍXZybaPJF$؂=`H\*2^88k8'T8nh)u)H̼i㌼J[`Tv&YXOaqxhvmR2ǚ4S8  TePcAm+ߒѨ_+8!lS(0X"T/&[umj:RܒHVk"b8;+ FQq\ h5Z/xX쇂mD ־Ho^ah%Қhl[an7]yŏ 7o5z!hTo ZfR1lZ)VzeZ)xSheEWT`hn1]h9Fi**T>*Z{9UZ{x;):Ruu FH6JcCRvu&Yu:j[j~k]zIA:oDVLI@WD Mj6]uGmR$}ňkz`~E(4c!,<$pQXHP gG;=oΝ] gҗZ\Qgp"I~^3s m8i>)wPNK-h^UvSv4FQv DCe1EL/}14Pek+ %a * 4)OX 1 @a H K%TهN2}qUqɋO9?L4p*MF\yR\їdrQl% EI.޾,˂̲Kg FJ3/İ2Ғ#Ȕ( 1 Zx7o%hawp&B, 󹱡턦FP&†!D$RL ]4eTx (m D\Q~d"H5D\jw(NVE@+ц4V}Ij3`r_^ǯ2x?5=gnJR=K?!)vH鱱x쐽Tzɸ|ԓ3ʟfIx/g[\uRoldfl `ބh68jk.wiz~vyO t}L'ח ?T%S"㉇?P<`78A'W%.)fse||GX_iTo{+|BS:: vpl:CKٓx }kVٲXVA+gRKbmu~Wv֫=⻛m}Q%S7J%8> ,\pTF/QQx& r9Z-)g!%=M:PH8G9֯Ŀ_LX k[F}_jTWӑuGxM*ٻzoٻ8~Ӗ4g8mrU3>b ֚n[˶PB2}a`_.%} mv'. 't_S eĨ^INnݣFCwq;ߡA: nuK Ct^J|,:ߥ!:o="[iH ١KcUnRbUN]#FZ"_OV!A#Pf (SI7A+!d@%#&|im1z#;H\k:f#ݰٴ$g:$m<!'Uw\CE[wd-\ ; UyH|2qNoxC$|K;U+w( O]y/̺ǓH:^; 6sVi0y21:qsYb~WY%H3|OYJmKb3ԯ}#Z$3 6}ҽE-L^(K%R(9 +':BQGYͮ`U 9/b'#HMj>Uzg2uF~$$=5ƒhZU%XKC'w6YI0{>A[[Z<^®[!!+M+4Bk4>u5x9B:@/CɟG#joU2ee[Z7zS,뚬5m(K$Y'Af:ȮMZ*3]IŨ `1萠1X]/\0mH^BujT  Zf zA))8c1֟@r yp|&KAPL=+{x˻Kk6Fz]C.clbQAv V(dn:a:[,d:[6zA[+Ő(U1`!ƒ@NJRc`C`bPReGs{ZKbue >-Ӛ]}=D4NRj< !Ȳq3+224^:A(q^$ AI0R)lI)HǶJ0̆W#Cv`6fY`S6`S6Yrw5V,fN .ل_SW[k{zĿfg$īwtxw.n9?_\zPHdB{􏠌;,qMVhL=uK9ʢAz[v,l<>4]h\nLgVB`F$&"&iZɶWr)K,>>1IpT}/nR쭯]z'={ [&DLkFQI!.=8}5*w ixgŲ~͂XK!N)Fx5,%uB_< GȶAiPX-WThTyhj/^nZ1SWj[<3!Ea͞cF?f&(u<ϾHTf b3ZkklBNbLj S*dB$d]!E~ |Bi-|>#3l6eּ_|2{xiy{f]th<1C Տ>(:6"AP*; KA>+* իkHH.&$_]PfK"aĎ[\o`ŎDghhV^;z_B\(lӜ8}1U6RuOܑvVLnȽ%&|sۍG93Om;__x{tumG:_|W~8z;>|CktwwvOǪՁvk\. 
Z ~k惥PoM0l+b;7z-Qfyg( k u,b^-}?BY|THs"1pV]7eY_0RkS@W]Yꀓ8g, DvKɨm0cJ^jwa^Oa*$ެlE22_6rNl(TʘNCHdV:đYϮCm길 ázYleA5pRGgL#zՉ4Pu֎/^<xs@ѓE) 0Ѱnr[ك?#@+Qx蘃fdCsUzbﭷ >,nW֨JUm(JNz_)+&f9he~6>H1UբҖ,FvZɠǔatkz %c19;8xs'Svޒ~x=h@/4L!6[s~` bgTgɽ!4؇֡_֩ ؒ jBdmRi)Ys?L'I'luF$׷#L35a 3'y"]`9Z @l;[S'_ FhIr1%0bf<(*?/QdϮcxoO=%^ES,. )b/@%&}L ä k){\u1 زc~г?2#s[ l grϩRb%iɮ #<o|sǖPpy;LRH [t'6#ilgqeYs'NE A!Ҡ1µGDAsjM^UR N:9(jGҶKێ$ïm'+m;F_.K,q /q":"'"z-caڿJ) ]rVEYӳ-ZTM~30RwgP||1˱#y3R i MR )0C`;z8x]/`yC"dԮuX!*dhȡP{I]߮D|s;d2e};&q֏F`S9?qץ_<?D,'ˆvpfntص#10es-@)(6Ss#Qkg9b!j)JCJ< OF ا?jW! 4Sr$q1zBޡ6,0>5WUڌlUP8D#S䔃#p)u*HiaTtWٽ#t46e+*7]%Ve,mJN+8qܤ3Up8Ѱ%*y7;մ<гZcF~t}+[30`xd=7YJ5/qb߽mK\ç9f gE*%ɪoad >pRLzNPq6:=$ج ڃ<J(jV2/Y6,%r洐TWlUtf L//l|"Q?f,99')iZ `7֘HnȈUF2'3\:OXD{hvX~WGsǏk]fmگψ]N;"4hjR09j⡔֏,Zy0뤓*"$_ۺ!{=ұ_n ѼxtswVA37~׋Wl}lύ$^̷(~A7'ԭJ.9<ɚn_?ܷ2umr}W?oBMb/H˩?~7+'IeB$"1"`BSѶ7wV_͞^Mgv9ul!I {gm\#nOf~GY_ .lK_LJ<|#!l 6ROOya<{dsp{YrӲѝtwr5^~ب/ÿpߗau,ϿqHߵ~bծk}ܵ}ٞ?&/|GOzVFK^ D`Vcoϟ͓~!OYrA!zTb '^pG¦pXf3%E>j8en|wcXQyp«4ؘռ0۲߈D>@Xהּ3 $W2au@1j|TYCTԃSj[ kMVW6oȒBA9jdI!猥.9CXhY骣+)K\0^%f=Տ {z,Etr ? p2 '}oOvq{Pޑ* kgڶܯ9]{D\VyjoluB@i!Gt|6n=a_m >[/mFa@*x^ iT]?WLh~'ho9,TyofgCpT5u(jϐ|;6ㅯ`Z"nF) LS$ r=g%4jm =.{ }7uSPirT:bOOVqqQ9˥huW#žK e7Y7z<"fTXfؕ^HM۬n$K+[?szӨ?s+Nm7_ {+H>hqXY#bfƒA3 (kbBʄ"E휦7e]!xᯝ} &mm+GLV$LΉ_g%KJ*ilȿ<xfje^ĻY|neZp p7 frߔ /0af;Ȳ8}^։YUI)"jTUº!qzS?kk{oDXoIdZDoI @nWI|jco\LRŻs%#lli&ɫPTb> 3zй*a܆'n)o6@pP[j@nՕoIB{ ʮYl!`r*b^B^ IѵF qu]DC/)O?o <{ bi/"6>5@ؔ? pm*'wx!FB>θ; NJ}Kۮ̃#ž{o_Ȯ߹O³-QңN ل}n.Dm; v{WޣEng'h}O&rܜΗ2fVN o8F)%҆}7rA$XĞ˦Q:jm\`$+/S,:ot̆@?Y=ڋx\$RGI{_~USI63oaH 4e0g\R(;QL>r+O_'B87C2fK o-eANY]_~/d~6եȺ2K^T_5׋EO"^tVEyW"]D_nYH鏧 7p+itF\&}sH&ڒwplJVYʰJ!% _^-ta x?tryyBN_}!79 j4 Q|{՚F=^;~$jsgӨ?wޢQGq$o01_:|o7<~](Ag_ЫRCIYlINbrFU˥5օt]\}2 uNy EXX(;YPWIeTF|"}:׃,D ȟY9b,GEi#C'o^~Kx{˽}P$Sl}ShYE2~:<\Es{\B`RS,Iy&gq-H'A rboq EtZ #]ߴ(Fj)Loԗl? @RB{r'Sɺ(?0DʩfNn#|(e/z$ L-uZGV; ܶZ@|sG] wx5{C.ewJCIR<̉ y*ߏĔ蔸7vMkE%hY ?|>~Hxj9Ȗ|2qP[ӧ>* zml}npl^7űi#;]pn1m1:/J*F\e:Bz%a Ktx%* > TM``w#O8hǫC~*Q xh9dt-k9pSC88@ EŢ!xJ1Zlj22^#ڣD3y5?Q|CYO~gEZq*"!U>!Qؙ*ȣ(.LI%á Dc&<鹻nTTdL8 mVEjP#O 8hM #-o<$Q>LI`y= h6F6{z\{v]Tfl7&gj2k%XrmDϭBScݐ|洫;UQdlz)&}WւM#D$4b"C"o~ӈ hx삨b[Yx&d!2BFܚ|QDkx'C"`ThgIS:BpM" 7AN`1aD>zA4C r[DI'-i{<^}fjw]Ú㲼Y jS]{]o/VS< &HѱJ'mH eCX x9~ H _ % )jrHiNUUչ6iI3nUJLN+lHa` f }=ȱ2FT Av *Lta =.XE-Yf9sB e$lw5Hi߹:Z^JoJy+AB1-n:!(oboϫqV#p:!یq"IҊ!ڂh]"VFiWjyRmE8T@d``_Z@ mA͵-6;82ʂ19#:N]NqThա(Z/$౻*Ȑp"yάAӔrQSCi !^8U2tW@V+A<}IAVRQJj)]R?$&a !]$fE#T]G4g,9ȕV>X[tM.{IGE8ZX|ȮTO HUW"±c.q"Q88.^iJj=]j/Vr*>>o3;ٰ; T!X8 EӜ1/Fh[!%ZBV͊7ePZ]"ykU( t5ԗVt4)ŕAH)iRZB$=IqK)iRJp3ذmxU׊ sG Q kBȩtX;A cAsǛCMVlC 4@XG;@1#HQέ AXKB̐xtjAZ"ԜvBGXb 4)-48.J):]iPs]2ȤT#>*qm*w?Ig$%p92"⢕ӑ Zf-!?u? 
Ep]/܄\~I [DsBC;%[o1\Ā+|pڙMZ0]m o%xCq!"grso``1@ۏ9BMUvqsŇ⸈ia{տɁ] >A9[;v*J;j(O {;T*36s}~ +uTf~ fhb6GLzk8ϠbPo砖@N{&Z>F sex`K¥uֺ8*`&(&i QU,rJbhYlubƹ޺` /\[PNj]I% tK?<4U0s 9Gj)s@xs8XG| $xX-vLN nOlMյCnm'/dK<ҼeD*+h"O`_2} 4أp!vsZ:[ õkÛ5$Bv*ЃK81AC b(w DZSZcdnQjffC,2a8GYFʲZ{p10ƒrx#ٝ(ٝ(ٝ(ٝd^FAΉ5ܳ3$އ'RzDU4#*IB% ;b9|VVZt~.b5<8cG61cRΙMކڤe~O 4Qi;r.߼ogr1[\՛u[ߵdیʣ'fb/8UXЂ*wF+ü)t1xD,RPkpG$HXc mѶ6FHniANNenPLZb Fg~Z[6$˟~y3/ U/Vm|Q(^Y "rD'_=oM;cӋiY 7+On f0o~GIo]?'ֈ#i8&_Q|$Y,Q,9\>gו bCŌ%b/i-T5WYԝ< 5!_U$D*yEO}bF$JPh͆jC[}>m]:z`St-4z^=}ىpх$@b9HrV뢷m;yV@PR!p~b<0E}u: jޗ*k}ӊimg?/N^W~[e[~?Lo #p+v͹m9|9Uʖ9 [1kndo~UlFngv\T<daЇutF&.09yz݌8ٟ)%lL:;#B a>^Ip=>B:>Ѝ|oB<˺edx!՘y#'7>[eIrYYM]  1wT˟cHD$TP=$:bb"j`b2t:︠f3߭XR(-,gF\ysᩁԨf]{6d bǪ;&fP.M*~3 ?o=7=WڸRa2q6(;ypnJm{w62v'gs  c~&٦B~}3LL/tXh.O܆(o?^r1vX:Bʾ鴖"}xPnjڢY(sܞHxG-g6^x=+LvFp tp9QMpVHݞy74o5z'LKfy¤i -yF=VҲApiK 3F2 AY_2tUR~K)Og$kRD`gM`"Ֆ i*v.O.`V'~,NZJ%*|K~Č6!bf?j܁CSA/׀Rj=.Uf.\gfgu`ݘ-Osb6FY`nn0!F<4ѕsbǗ(`T,^ZdRfSNw&q&QFP[Uƈx *LjW ELmzu}8֔)tuSY%ڛvkMnې\D2qm9y(]< bu(Qhe| 'v ^noEM.XJͺLalL(]KPT ( '!c'/*W>1ԆA *wx'Vn1V}0b2Gl,ʝ BJ`}1-M>ڛ*),6rܡGKx-];[X8M,{GK bIvS{sD$pb$<0q3žg=O%Lɝ]p2#k2?zN7N6c%ѻUd K'8x` šLe$'C  _dq2P\#r@?:Z' $?M"kI CKTaNZ Hc'?P&('APx+x! 1g6࠽bcJ]*$C%8fB@e=+8聞=XiU 3/0BrN[Vt\R9 Y' oǻjMѓB:灆k@^m-I'$g݋5P+O౦{MH.u>c>H[.ޡ>U`(j7W٫ѳcio[9Ͼ.GW4m:A&uɛ7,_[TzmuثuhLҝQQ&weHV FY,vvװᙗ4`vkGn5걽!JR%ɼJ6nI_D9M {>0,G-̼T~KJ{z2sڽ;ᜐD:& |b 8o3lnɣK% Q1TIJw/7<׫_ /ͧOv'ecCTu:IdK)k<1ajsr3XHu"G100yG^b>xt~D̨*Z/3LrTjAJpRǠka2IQH>| EykQQ˻յy.>5*պЌĠsaѺQb ݼP C,~> {Θ`}sֈh=f&׿ Q: U+pF@"1L=qjFn!n_ckXrZ[HrXY ڗ*uk׏4q0.^`ͥZmRpPiHmJr]2a*`{0B bւAhl(:IZvLs- ggWi5w;uf K~َ"PLi{NÔ.yQX扇EdXIS涪IPVosKTԵk4̝E /"{i%%F;8^݌d 2sIgcŵpu+ƘT9]Ms0k^"aյr߄"ǂ<FZu"ͤPJ6J5yl@:x9nw*Ru&i T-HɌji I<AJGeA CHn s'HO!la_/.6VwuO6Y}d<ŹQ#|9Hs?Ϋx&;cޗsbpg?^߯~]66|{{{_﬛8Mתw /~jT_ٖ݉"Wx﷟@uSpFqԅ@u QŨ+p+DrCv疮|݋(a1At2wuPCpYŶȅ]֠dmYgDǠk$BrLb#&H9¨ b:ڍ^548)CB1‘XQo#V70a.,փ59S, $ ]~m҉"!=0T;YX^ҟ;z>Ɠ7ΥNNC"?5[4}_G\;\_j{Ki~oڦvm)-PZVkVΩuQȺ*Z KZُFW췫{߻̹tH}b@/nOvuЁ֬3!|tvK9oX|Ҧ8Jé )C "kWpa[MנU}}zs5_GR8mqx{RGa1z)Q )0q_ǫ/<7wF|cP@Q6W15cA0!`mР`z7,dx!;1~Dޭ܇i y)P^ʤ'&_A+ @*W!7/˧~ypW8d]\=..73)ȝ]n޽tuŤ^GO+krrT$RJUk]). 5܂=h:821Snn:қ8Lz {s]vYA;o Tf s y[[R7)$ў|2bH05?plg2@OaYʐiAH5Y t *p *#|5 qY$"aqUK׾i=" F @Mel6!cgH~g}H^h-6#F"9U&%獤@+_+:E/-Gm0Z: j+fNK-@NãU厗Sp]yܥ| -ӣa1z,Gh.fldy<-Yџdzʸam`.b(Rw8 y^}7$amsO&H-`_+9?rNgB^#?5 zGB=yy'XP(Ep~x*Kc\_K{'On~*wzEsRSnɾm[O>n-yb2[/î܂iruyzJ+/A\\97UeCms{?xm,n_ߥl~TdXMYDa %nz/6r_(;ju(2eHK9@y;[}W-oz]ow%UzmLKT,y[ӏ<*QЧJ:e_<gxMRIP7<}U,VU,!;>g Gt\ΒV*y!$)@)]h6I?j.( B8o~mkF̓baheZFA*:J9Uj= X&[ J( Ԑ97@3ʕe]J5mř00ek0E?gUH*)\Ed. zOaE!˼ ͐jI@fC ,"O<BV α}Y82 OȗmCRvYRy+߾ڌڟK7҅uiW&hj\^,~˸/%CWӛ^xMw]#\RmB^쏡qIc)^b]R#ђrBA=Z!/Ӟ oKc >H5oOS)t"Y`@BE褉sgƠ 6k }\BN?K5鑨+o+uX=x\Rd&Ri$B#\4yԊ*P\I. 틻B ŠPnaUi6/X;rkYn%Em润ܘa.Nu8}1mV"/${3+BJ+RꢒU.=#XIӵ2[;?%VlE"kTs1yGb1usw+@!s"bN(Jvg͵5y1 E KֲK8ogj`zb$|x.\|%olCä"ՌCIEͅH$h bTd0Sne,T8P2.9,YThOsYi&A*z4ރ ތ^8qϖ ֊r-#< ѱ S ! bD ؋);RAXt9adQL>Y бmU{ ql +̀ly!bts1t%^DC+5|g;X`sx;,poա+{@ܩKH=f]a@^ hݶ&r-ȥїfc65s MMGjj-vyu,Ob+Jd;p1k<-+N) obTE@{QA#qo]7,B ڹ#jӏbhqھq&O¾{փ+YIR+ k2wD5&L%ItZrJ5B\m~{}Wlǣl"Yj(#y+/(4{hjV_OUcT!f]})?<ٸw]/כmV~Sv5Is_wڻAw%^[n7t[X=QuCXyԵ[:V_|Fc2ބ/$hW-Ucg=YșhM >uz߻V_ [*!6Zʡ[:7лa!gnI6l;=&y-I}Gw!cy@ևn۔̮,!6=7lu~a{'lG_Q!\u(A %w({"V,( h' q T#*E 7U#c`GMГ%J|ȍb3ƥ5#]{<Zfr g-SSãQ)8NZT+58L\?=)'eFӼv`CIpa4ܷ^iA_At롪PTamX!/TɵLUmb}{-͒u9Z3)9wu&^ 58(κwbi]Pk8FA@$`z;uJ|NIJAc;&?;Bq9~hvR: Jx9t"fKnmM1vّbh,.yx-MLd'D/'s_ޡ'3;BOR?0`\j)h0b֠h ULKJWS{Ȥo GƱ"[ښ0~0trڡ^KGzώN~var/home/core/zuul-output/logs/kubelet.log0000644000000000000000006032005615145361754017712 0ustar rootrootFeb 18 14:31:32 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 18 14:31:32 crc restorecon[4676]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 14:31:32 crc restorecon[4676]:
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 
crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:32 crc restorecon[4676]: 
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13, pod 57a731c4-ef35-47a8-b875-bfb08a7f8011 (paths below are relative to /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011):
  volumes/kubernetes.io~empty-dir/catalog-content/catalog/<name> and <name>/catalog.json, for <name> in: cephcsi-operator (catalog.json only in this span), cincinnati-operator, cluster-kube-descheduler-operator, cluster-logging, cluster-observability-operator, compliance-operator, container-security-operator, costmanagement-metrics-operator, cryostat-operator, datagrid, devspaces, devworkspace-operator, dpu-network-operator, eap, elasticsearch-operator, external-dns-operator, fence-agents-remediation, file-integrity-operator, fuse-apicurito, fuse-console, fuse-online, gatekeeper-operator-product, jaeger-product, jws-operator, kernel-module-management, kernel-module-management-hub, kiali-ossm, kubevirt-hyperconverged, logic-operator-rhel8, loki-operator, lvms-operator, machine-deletion-remediation, mcg-operator, mta-operator, mtc-operator, mtr-operator, mtv-operator, multicluster-engine, netobserv-operator, node-healthcheck-operator, node-maintenance-operator, node-observability-operator, ocs-client-operator, ocs-operator, odf-csi-addons-operator, odf-multicluster-orchestrator, odf-operator, odf-prometheus-operator, odr-cluster-operator, odr-hub-operator, openshift-custom-metrics-autoscaler-operator, openshift-gitops-operator, openshift-pipelines-operator-rh, openshift-secondary-scheduler-operator, opentelemetry-product, quay-bridge-operator, quay-operator, recipe, red-hat-camel-k, red-hat-hawtio-operator, redhat-oadp-operator, rh-service-binding-operator, rhacs-operator, rhbk-operator, rhdh, rhods-operator, rhods-prometheus-operator, rhpam-kogito-operator, rhsso-operator, rook-ceph-operator, run-once-duration-override-operator, sandboxed-containers-operator, security-profiles-operator, self-node-remediation, serverless-operator, service-registry-operator, servicemeshoperator, servicemeshoperator3, skupper-operator, submariner, tang-operator, tempo-product, trustee-operator, volsync-product, web-terminal
  volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator, with its bundle-v1.15.0.json, channel.json and package.json
  volumes/kubernetes.io~empty-dir/catalog-content/cache, cache/pogreb.v1 and cache/pogreb.v1/db, with db/00000-1.psg, db/00000-1.psg.pmt, db/db.pmt, db/index.pmt, db/main.pix, db/overflow.pix and cache/pogreb.v1/digest
  etc-hosts
  containers/extract-utilities/bc8d0691, 6b76097a and 34d1af30; containers/extract-content/312ba61c, 645d5dd1 and 16e825f0
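Each record in this stream has the same shape as journald emits it: an absolute path, the fixed message "not reset as customized by admin to", and the target SELinux context. A minimal Python sketch for pulling (path, context) pairs out of such a stream; RECORD and iter_records are illustrative names for this log's format, not part of restorecon or any existing tool:

    import re

    # One restorecon record, as seen in this log:
    #   <path> not reset as customized by admin to <user>:<role>:<type>:<range>
    RECORD = re.compile(
        r"(?P<path>/\S+) not reset as customized by admin to "
        r"(?P<context>\w+:\w+:\w+:s0(?::c\d+,c\d+)?)"
    )

    def iter_records(text):
        """Yield (path, context) pairs from concatenated restorecon output."""
        for m in RECORD.finditer(text):
            yield m.group("path"), m.group("context")

    # e.g. pairs = list(iter_records(open("kubelet.log").read()))

Because the pattern anchors on the fixed message text, it tolerates records that were fused onto one line or wrapped across lines, as they are throughout this dump.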
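The MCS category pairs in these contexts (s0:c7,c13 for this catalog pod, s0:c0,c25, s0:c14,c22 and so on for the pods that follow) are what give each pod its own SELinux compartment; files under a pod's containers/ directory sometimes carry several different pairs, presumably left over from container instances launched under earlier category assignments (the node-ca and dns container files later in this log show this). A small sketch, building on iter_records() above, that groups observed contexts per pod UID so mixed labels stand out; POD_UID and contexts_by_pod are again illustrative names:

    import re
    from collections import defaultdict

    POD_UID = re.compile(r"/var/lib/kubelet/pods/(?P<uid>[0-9a-f-]{36})")

    def contexts_by_pod(records):
        """Map pod UID -> set of SELinux contexts seen on its files."""
        seen = defaultdict(set)
        for path, ctx in records:
            m = POD_UID.search(path)
            if m:
                seen[m.group("uid")].add(ctx)
        return seen

    # Pods mapping to more than one MCS pair are worth a closer look:
    # for uid, ctxs in contexts_by_pod(iter_records(text)).items():
    #     if len(ctxs) > 1:
    #         print(uid, sorted(ctxs))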
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9, 2a23d348 and 075dbd49
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin, pod 3cb93b32-e0ae-4377-b9c8-fdb9842c6d59 (paths relative to /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59):
  to system_u:object_r:container_file_t:s0:c842,c986: volumes/kubernetes.io~configmap/serviceca, serviceca/..data, serviceca/..2025_02_24_06_09_13.3521195566 with image-registry.openshift-image-registry.svc..5000, image-registry.openshift-image-registry.svc.cluster.local..5000 and default-route-openshift-image-registry.apps-crc.testing, the same three entries directly under serviceca/, etc-hosts, and containers/node-ca/005579f4
  to system_u:object_r:container_file_t:s0:c377,c642: containers/node-ca/dd585ddd
  to system_u:object_r:container_file_t:s0:c338,c343: containers/node-ca/17ebd0ab
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin, pod 09ae3b1a-e8e7-4524-b54b-61eab6f9239a (paths relative to /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a):
  to system_u:object_r:container_file_t:s0:c764,c897: volumes/kubernetes.io~configmap/etcd-serving-ca with ..2025_02_23_05_23_11.449897510 (ca-bundle.crt), ..data and ca-bundle.crt; volumes/kubernetes.io~configmap/trusted-ca-bundle with ..2025_02_23_05_23_11.1287037894 and ..data; volumes/kubernetes.io~configmap/audit-policies with ..2025_02_23_05_23_11.1301053334 (policy.yaml), ..data and policy.yaml; etc-hosts; containers/fix-audit-permissions/ea28e322; containers/oauth-apiserver/4eb2e958
  to system_u:object_r:container_file_t:s0:c49,c263: containers/fix-audit-permissions/bf5f3b9c and containers/oauth-apiserver/692e6683
  to system_u:object_r:container_file_t:s0:c10,c701: containers/fix-audit-permissions/af276eb7 and containers/oauth-apiserver/871746a7
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25, pod 43509403-f426-496e-be36-56cef71462f5 (paths relative to /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5):
  volumes/kubernetes.io~configmap/console-config with ..2025_02_24_06_09_06.2875086261 (console-config.yaml), ..data and console-config.yaml
  volumes/kubernetes.io~configmap/trusted-ca-bundle with ..2025_02_24_06_09_06.286118152 (tls-ca-bundle.pem), ..data and tls-ca-bundle.pem
  volumes/kubernetes.io~configmap/oauth-serving-cert with ..2025_02_24_06_09_06.3865795478 (ca-bundle.crt), ..data and ca-bundle.crt
  volumes/kubernetes.io~configmap/service-ca with ..2025_02_24_06_09_06.584414814 (service-ca.crt), ..data and service-ca.crt
  etc-hosts; containers/console/ca9b62da and 0edd6fce
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22, pod 7583ce53-e0fe-4a16-9e4d-50516596a136 (paths relative to /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136):
  volumes/kubernetes.io~configmap/config with ..2025_02_24_06_20_07.2406383837 (config.yaml, openshift-controller-manager.client-ca.configmap, openshift-controller-manager.openshift-global-ca.configmap, openshift-controller-manager.serving-cert.secret), ..data, and the same four entries directly under config/
  volumes/kubernetes.io~configmap/client-ca with ..2025_02_24_06_20_07.1071801880 (ca-bundle.crt), ..data and ca-bundle.crt
  volumes/kubernetes.io~configmap/proxy-ca-bundles with ..2025_02_24_06_20_07.2494444877 (tls-ca-bundle.pem), ..data and tls-ca-bundle.pem
  etc-hosts; containers/controller-manager/89b4555f
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin, pod 87cf06ed-a83f-41a7-828d-70653580a8cb (paths relative to /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb):
  to system_u:object_r:container_file_t:s0:c466,c972: volumes/kubernetes.io~configmap/config-volume with ..2025_02_23_05_23_22.4071100442 (Corefile), ..data and Corefile; etc-hosts; containers/dns/e68efd17; containers/kube-rbac-proxy/1e59206a
  to system_u:object_r:container_file_t:s0:c457,c841: containers/dns/655fcd71 and containers/kube-rbac-proxy/9acf9b65
  to system_u:object_r:container_file_t:s0:c55,c1022: containers/dns/0d43c002 and containers/kube-rbac-proxy/5ae3ff11
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin, pod 44663579-783b-4372-86d6-acf235a62d72: containers/dns-node-resolver/27af16d1 to system_u:object_r:container_file_t:s0:c304,c1017; 7918e729 to system_u:object_r:container_file_t:s0:c853,c893; 5d976d0e to system_u:object_r:container_file_t:s0:c585,c981
Feb 18 14:31:32 crc restorecon[4676]: not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25, pod 9d4552c7-cd75-42dd-8880-30dd377c49a4: volumes/kubernetes.io~configmap/config with ..2025_02_23_05_38_56.1112187283 (controller-config.yaml), ..data and controller-config.yaml
Feb 18 14:31:32 crc restorecon[4676]:
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 
14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 14:31:32 crc 
restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:32 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: [record repeated for each catalog entry listed below, once for the directory and once for its catalog.json] /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/<entry> not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
[entries, in log order: cert-manager, cert-utils-operator, cluster-aas-operator, cluster-impairment-operator, cluster-manager, cockroachdb, codeflare-operator, community-kubevirt-hyperconverged, community-trivy-operator, community-windows-machine-config-operator, customized-user-remediation, cxl-operator, dapr-kubernetes-operator, datadog-operator, datatrucker-operator, dbaas-operator, debezium-operator, dell-csm-operator, deployment-validation-operator, devopsinabox, dns-operator, dynatrace-operator, eclipse-amlen-operator, eclipse-che, ecr-secret-operator, edp-keycloak-operator, eginnovations-operator, egressip-ipam-operator, ember-csi-community-operator, etcd, eventing-kogito, external-secrets-operator, falcon-operator, fence-agents-remediation, flink-kubernetes-operator, flux, k8gb, fossul-operator, github-arc-operator, gitops-primer, gitwebhook-operator, global-load-balancer-operator, grafana-operator, group-sync-operator, hawtio-operator, hazelcast-platform-operator, hedvig-operator, hive-operator, horreum-operator, hyperfoil-bundle, ibm-block-csi-operator-community, ibm-security-verify-access-operator, ibm-spectrum-scale-csi-operator, ibmcloud-operator, infinispan, integrity-shield-operator, ipfs-operator, istio-workspace-operator, jaeger, kaoto-operator, keda, keepalived-operator, keycloak-operator, keycloak-permissions-operator, klusterlet, kogito-operator, koku-metrics-operator, konveyor-operator, korrel8r, kuadrant-operator, kube-green, kubecost, kubernetes-imagepuller-operator, kubeturbo, l5-operator, layer7-operator, lbconfig-operator, lib-bucket-provisioner, limitador-operator, logging-operator, loki-helm-operator, loki-operator, machine-deletion-remediation, mariadb-operator, marin3r, mercury-operator, microcks, mongodb-atlas-kubernetes, mongodb-operator, move2kube-operator, multi-nic-cni-operator, multicluster-global-hub-operator, multicluster-operators-subscription, must-gather-operator, namespace-configuration-operator, ncn-operator, ndmspc-operator, netobserv-operator, neuvector-community-operator, nexus-operator, nexus-operator-m88i, nfs-provisioner-operator, nlp-server, node-discovery-operator, node-healthcheck-operator, node-maintenance-operator, nsm-operator, oadp-operator, observability-operator, oci-ccm-operator, ocm-operator, odoo-operator, opendatahub-operator, openebs, openshift-nfd-operator, openshift-node-upgrade-mutex-operator, openshift-qiskit-operator, opentelemetry-operator, patch-operator, patterns-operator, pcc-operator, pelorus-operator, percona-xtradb-cluster-operator, portworx-essentials, postgresql, proactive-node-scaling-operator, project-quay, prometheus, prometheus-exporter-operator, prometurbo, pubsubplus-eventbroker-operator, pulp-operator, rabbitmq-cluster-operator, rabbitmq-messaging-topology-operator, redis-operator, reportportal-operator, resource-locker-operator, rhoas-operator, ripsaw, sailoperator, sap-commerce-operator]
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 14:31:33 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 14:31:33 crc restorecon[4676]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 18 14:31:33 crc kubenswrapper[4957]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 14:31:33 crc kubenswrapper[4957]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 18 14:31:33 crc kubenswrapper[4957]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 14:31:33 crc kubenswrapper[4957]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 14:31:33 crc kubenswrapper[4957]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 18 14:31:33 crc kubenswrapper[4957]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.940270 4957 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949297 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949321 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949327 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949332 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949337 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949345 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949352 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949358 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949363 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949370 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949375 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949382 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949388 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949393 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949398 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949404 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949409 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949414 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949440 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949446 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949451 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949457 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949462 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949467 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949472 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949477 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949482 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949487 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949492 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949497 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949504 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949509 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949514 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949519 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949524 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949529 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949533 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949538 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949545 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949551 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949556 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949562 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949568 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949573 4957 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949578 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949582 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949587 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949591 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949596 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949601 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949605 4957 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949610 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949615 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949619 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949624 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949628 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949636 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949643 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949648 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949653 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949658 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949663 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949668 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949672 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949677 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949682 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949686 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949691 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949695 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949700 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.949705 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949809 4957 flags.go:64] FLAG: --address="0.0.0.0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949820 4957 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949829 4957 flags.go:64] FLAG: --anonymous-auth="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949837 4957 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949844 4957 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949850 4957 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949857 4957 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949864 4957 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949869 4957 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949875 4957 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949881 4957 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949887 4957 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949892 4957 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949898 4957 flags.go:64] FLAG: --cgroup-root=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949904 4957 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949910 4957 flags.go:64] FLAG: --client-ca-file=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949915 4957 flags.go:64] FLAG: --cloud-config=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949921 4957 flags.go:64] FLAG: --cloud-provider=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949926 4957 flags.go:64] FLAG: --cluster-dns="[]"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949933 4957 flags.go:64] FLAG: --cluster-domain=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949939 4957 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949945 4957 flags.go:64] FLAG: --config-dir=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949951 4957 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949957 4957 flags.go:64] FLAG: --container-log-max-files="5"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949965 4957 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949970 4957 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949976 4957 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949982 4957 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949987 4957 flags.go:64] FLAG: --contention-profiling="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949993 4957 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.949998 4957 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950004 4957 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950010 4957 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950021 4957 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950026 4957 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950034 4957 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950039 4957 flags.go:64] FLAG: --enable-load-reader="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950045 4957 flags.go:64] FLAG: --enable-server="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950050 4957 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950057 4957 flags.go:64] FLAG: --event-burst="100"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950063 4957 flags.go:64] FLAG: --event-qps="50"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950068 4957 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950074 4957 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950080 4957 flags.go:64] FLAG: --eviction-hard=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950087 4957 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950093 4957 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950098 4957 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950104 4957 flags.go:64] FLAG: --eviction-soft=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950109 4957 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950115 4957 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950121 4957 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950126 4957 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950131 4957 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950137 4957 flags.go:64] FLAG: --fail-swap-on="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950142 4957 flags.go:64] FLAG: --feature-gates=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950149 4957 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950155 4957 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950161 4957 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950167 4957 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950173 4957 flags.go:64] FLAG: --healthz-port="10248"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950178 4957 flags.go:64] FLAG: --help="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950184 4957 flags.go:64] FLAG: --hostname-override=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950190 4957 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950195 4957 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950200 4957 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950206 4957 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950211 4957 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950217 4957 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950223 4957 flags.go:64] FLAG: --image-service-endpoint=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950228 4957 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950233 4957 flags.go:64] FLAG: --kube-api-burst="100"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950239 4957 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950245 4957 flags.go:64] FLAG: --kube-api-qps="50"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950250 4957 flags.go:64] FLAG: --kube-reserved=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950255 4957 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950262 4957 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950268 4957 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950273 4957 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950279 4957 flags.go:64] FLAG: --lock-file=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950284 4957 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950290 4957 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950295 4957 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950304 4957 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950309 4957 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950315 4957 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950320 4957 flags.go:64] FLAG: --logging-format="text"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950325 4957 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950331 4957 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950337 4957 flags.go:64] FLAG: --manifest-url=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950342 4957 flags.go:64] FLAG: --manifest-url-header=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950349 4957 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950355 4957 flags.go:64] FLAG: --max-open-files="1000000"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950362 4957 flags.go:64] FLAG: --max-pods="110"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950367 4957 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950373 4957 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950379 4957 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950384 4957 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950390 4957 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950396 4957 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950402 4957 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950414 4957 flags.go:64] FLAG: --node-status-max-images="50"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950436 4957 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950692 4957 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950705 4957 flags.go:64] FLAG: --pod-cidr=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950711 4957 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950722 4957 flags.go:64] FLAG: --pod-manifest-path=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950729 4957 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950736 4957 flags.go:64] FLAG: --pods-per-core="0"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950742 4957 flags.go:64] FLAG: --port="10250"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.950748 4957 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951228 4957 flags.go:64] FLAG: --provider-id=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951266 4957 flags.go:64] FLAG: --qos-reserved=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951282 4957 flags.go:64] FLAG: --read-only-port="10255"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951297 4957 flags.go:64] FLAG: --register-node="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951312 4957 flags.go:64] FLAG: --register-schedulable="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951325 4957 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951801 4957 flags.go:64] FLAG: --registry-burst="10"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951850 4957 flags.go:64] FLAG: --registry-qps="5"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.951866 4957 flags.go:64] FLAG: --reserved-cpus=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952067 4957 flags.go:64] FLAG: --reserved-memory=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952083 4957 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952096 4957 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952109 4957 flags.go:64] FLAG: --rotate-certificates="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952121 4957 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952134 4957 flags.go:64] FLAG: --runonce="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952146 4957 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952168 4957 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952182 4957 flags.go:64] FLAG: --seccomp-default="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952194 4957 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952207 4957 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952220 4957 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952234 4957 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952655 4957 flags.go:64] FLAG: --storage-driver-password="root"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.952788 4957 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953134 4957 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953145 4957 flags.go:64] FLAG: --storage-driver-user="root"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953156 4957 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953168 4957 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953178 4957 flags.go:64] FLAG: --system-cgroups=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953189 4957 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953217 4957 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953227 4957 flags.go:64] FLAG: --tls-cert-file=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953236 4957 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953254 4957 flags.go:64] FLAG: --tls-min-version=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953263 4957 flags.go:64] FLAG: --tls-private-key-file=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953272 4957 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953280 4957 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953290 4957 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953306 4957 flags.go:64] FLAG: --v="2"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953322 4957 flags.go:64] FLAG: --version="false"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953335 4957 flags.go:64] FLAG: --vmodule=""
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953346 4957 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.953357 4957 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953645 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953658 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953668 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953676 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953684 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953692 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953700 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953708 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953715 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953723 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953731 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953739 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953749 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953758 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953766 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953777 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953789 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953799 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953808 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953817 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953828 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953837 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953845 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953853 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953861 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.953870 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954062 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954069 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954077 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954084 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954093 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954100 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954108 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954116 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954123 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954131 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954139 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954146 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954154 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954162 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954170 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954178 4957 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954186 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954194 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954204 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954211 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954219 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954227 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954234 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954242 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954249 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954257 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954265 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954272 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954280 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954287 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954298 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954308 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954318 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954328 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954336 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954343 4957 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954351 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954360 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954367 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954375 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954382 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954391 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954398 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954407 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.954437 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.954490 4957 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.964724 4957 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.964756 4957 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964851 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964860 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964866 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964872 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964877 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964882 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964887 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964893 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964900 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964905 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964910 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964915 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964920 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964925 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964930 4957 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964936 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964943 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964949 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964956 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964961 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964966 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964971 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964977 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964982 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964987 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964992 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.964998 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965004 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965009 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965015 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965021 4957 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965026 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965032 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965037 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965044 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965050 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965055 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965060 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965065 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965070 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965076 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965081 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965087 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965094 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965100 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965105 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965111 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965117 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965122 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965127 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965132 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965137 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965142 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965147 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965152 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965158 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965162 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965168 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965172 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965179 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965185 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965191 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965196 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965203 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965209 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965213 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965219 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965225 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965230 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965235 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965241 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.965249 4957 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965392 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965400 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965405 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965410 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965431 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965436 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965441 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965446 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965451 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965455 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965460 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965465 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965470 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965475 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965480 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965485 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965490 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965495 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965499 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965505 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965509 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965515 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965522 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965529 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965534 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965540 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965546 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965552 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965558 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965563 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965569 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965574 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965579 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965584 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965590 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965596 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965601 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965606 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965611 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965617 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965623 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965628 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965633 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965638 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965643 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965648 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965653 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965658 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965662 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965667 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965672 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965677 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965683 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965687 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965693 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965698 4957 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965704 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965710 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965715 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965720 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965726 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965732 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965736 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965741 4957 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965746 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965750 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965756 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965762 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965768 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965773 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 14:31:33 crc kubenswrapper[4957]: W0218 14:31:33.965779 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.965787 4957 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.965960 4957 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.970908 4957 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.970992 4957 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.972949 4957 server.go:997] "Starting client certificate rotation"
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.972975 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.973111 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 22:35:37.961555056 +0000 UTC
Feb 18 14:31:33 crc kubenswrapper[4957]: I0218 14:31:33.973178 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.001794 4957 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.004265 4957 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.004260 4957 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.025374 4957 log.go:25] "Validated CRI v1 runtime API"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.066957 4957 log.go:25] "Validated CRI v1 image API"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.069320 4957 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.075261 4957 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-14-27-04-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.075310 4957 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.099116 4957 manager.go:217] Machine: {Timestamp:2026-02-18 14:31:34.093692771 +0000 UTC m=+0.614557535 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9fb0acd4-1ed1-4909-a63a-3f4dd7b07055 BootID:73cbc702-e999-4b05-a826-bb1b15d4d73b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b9:b9:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b9:b9:df Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b9:96:24 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a3:42:b1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:16:4f:f7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:40:cd:64 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:1b:fa:e9:cb:48 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:e4:3c:7f:5a:4c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.099761 4957 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.099956 4957 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.102360 4957 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.102588 4957 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.102647 4957 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.102941 4957 topology_manager.go:138] "Creating topology manager with none policy"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.102954 4957 container_manager_linux.go:303] "Creating device plugin manager"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.103543 4957 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.103588 4957 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.104339 4957 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.104850 4957 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.109219 4957 kubelet.go:418] "Attempting to sync node with API server"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.109252 4957 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.109286 4957 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.109305 4957 kubelet.go:324] "Adding apiserver pod source"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.109322 4957 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.114801 4957 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.114897 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.114995 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.115002 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.115101 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.115861 4957 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.118283 4957 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119846 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119884 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119897 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119908 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119924 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119934 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119943 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119957 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119970 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.119979 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.120003 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.120011 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.120786 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.121301 4957 server.go:1280] "Started kubelet"
Feb 18 14:31:34 crc systemd[1]: Started Kubernetes Kubelet.
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.124154 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.124603 4957 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.123873 4957 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.126370 4957 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.137640 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.137815 4957 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.137953 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:18:53.067272734 +0000 UTC
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.138535 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.138861 4957 server.go:460] "Adding debug handlers to kubelet server"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.139454 4957 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.139555 4957 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.139816 4957 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.140674 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.140848 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.140981 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="200ms"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.141326 4957 factory.go:55] Registering systemd factory
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.141393 4957 factory.go:221] Registration of the systemd container factory successfully
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.142021 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18955dbcd94f9bc1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 14:31:34.121266113 +0000 UTC m=+0.642130867,LastTimestamp:2026-02-18 14:31:34.121266113 +0000 UTC m=+0.642130867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.147691 4957 factory.go:153] Registering CRI-O factory
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.147731 4957 factory.go:221] Registration of the crio container factory successfully
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.147967 4957 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.148000 4957 factory.go:103] Registering Raw factory
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.148016 4957 manager.go:1196] Started watching for new ooms in manager
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.148880 4957 manager.go:319] Starting recovery of all containers
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.152880 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153065 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153172 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153256 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153329 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153445 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153517 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
"Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153581 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153645 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153705 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153779 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153877 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.153957 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154027 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154120 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154195 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154260 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154338 4957 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154479 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154563 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154630 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154693 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154768 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154845 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.154924 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155065 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155154 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155233 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155303 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155360 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155477 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155559 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155620 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155693 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155768 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155841 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155917 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.155979 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156039 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156096 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156162 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156227 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156321 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156384 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156558 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156633 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156701 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156774 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156854 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.156949 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157018 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157086 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157168 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157281 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157397 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157527 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157675 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157808 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157907 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.157993 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158077 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158161 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158277 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158408 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158541 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158636 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158731 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158851 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.158940 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159021 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159105 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159185 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159276 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159360 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159477 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159555 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159640 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159726 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159831 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.159952 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160046 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160123 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160228 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160316 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160396 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160476 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160566 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160650 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160738 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160825 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.160901 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161015 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161110 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161208 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161291 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161373 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161461 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161547 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161628 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161710 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161796 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161888 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.161973 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162056 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162160 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162250 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162334 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162424 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162515 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162681 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162766 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162848 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.162926 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163014 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163092 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163180 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163266 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163352 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163438 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163549 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163650 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163737 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163817 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163900 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.163981 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167263 4957 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167359 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167449 4957 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167524 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167594 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167658 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167718 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167846 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.167940 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168021 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168136 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168222 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168308 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168497 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168579 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168605 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168626 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168648 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168530 4957 manager.go:324] Recovery completed Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168669 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168692 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168716 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168736 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168815 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168837 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168857 4957 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168878 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168900 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168919 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168938 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168957 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168979 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.168998 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169021 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169042 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169061 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169079 4957 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169099 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169118 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169137 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169158 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169179 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169199 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169219 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169239 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169259 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169278 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169301 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169320 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169339 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169358 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169375 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169399 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169439 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169459 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169480 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169501 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169521 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169539 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169560 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169579 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169596 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169617 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169636 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169653 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169672 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169691 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169710 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169729 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169749 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169770 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169789 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169808 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169826 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169845 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169864 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169884 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169906 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169926 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169946 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169964 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.169983 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.170001 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.170020 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.170039 4957 reconstruct.go:97] "Volume reconstruction finished" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.170051 4957 reconciler.go:26] "Reconciler: start to sync state" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.187013 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.189549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.189617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.189630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.190540 4957 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.190567 4957 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.190611 4957 state_mem.go:36] "Initialized new in-memory state store" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.208748 4957 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.211448 4957 policy_none.go:49] "None policy: Start" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.211551 4957 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.211589 4957 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.211643 4957 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.211702 4957 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.212643 4957 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.212677 4957 state_mem.go:35] "Initializing new in-memory state store" Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.216467 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.216585 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.239281 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.271410 4957 manager.go:334] "Starting Device Plugin manager" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.271566 4957 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.271586 4957 server.go:79] "Starting device plugin registration server" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.272387 4957 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.272447 4957 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.272607 4957 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.272724 4957 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.272742 4957 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.281872 4957 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.312192 4957 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.312452 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.314558 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.314611 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.314633 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.315067 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.315549 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.315586 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.316893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.316957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.316982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.317062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.317165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.317187 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.317208 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.317381 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.317462 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.318986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319341 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319401 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319482 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319655 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.319746 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321138 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321374 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321456 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.321512 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.322709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.322751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.322778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.322782 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.322792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.322814 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.323018 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.323051 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.324632 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.324683 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.324703 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.342703 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="400ms" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373494 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373547 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373567 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373598 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373636 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373790 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373877 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.373984 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374099 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374155 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374205 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374272 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374320 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374433 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.374532 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.375486 4957 
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.375486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.375536 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.375547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.375579 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.376005 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476129 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476219 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476258 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476298 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476322 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476345 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476383 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476437 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476506 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476454 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476641 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476736 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476721 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.476708 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477115 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477121 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477127 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477219 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477185 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.477396 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.576854 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.580076 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.580176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.580194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.580236 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.581034 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.668093 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.688159 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.697253 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.710290 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ac095f914be2bbc3cc1ae5f0e3d1a1bb571f5316187ff22efe5c1a57465118b0 WatchSource:0}: Error finding container ac095f914be2bbc3cc1ae5f0e3d1a1bb571f5316187ff22efe5c1a57465118b0: Status 404 returned error can't find the container with id ac095f914be2bbc3cc1ae5f0e3d1a1bb571f5316187ff22efe5c1a57465118b0 Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.726093 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.726093 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.732336 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.744541 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="800ms"
Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.760438 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-eb3b541e1a1bd6a491e85d14a47b8b53f45822e01ae36c6b40f38bcf1e10b36c WatchSource:0}: Error finding container eb3b541e1a1bd6a491e85d14a47b8b53f45822e01ae36c6b40f38bcf1e10b36c: Status 404 returned error can't find the container with id eb3b541e1a1bd6a491e85d14a47b8b53f45822e01ae36c6b40f38bcf1e10b36c
Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.761708 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-048799eadc167850f8abda955b4696011fd57c35924d51148afb877cc652c6d4 WatchSource:0}: Error finding container 048799eadc167850f8abda955b4696011fd57c35924d51148afb877cc652c6d4: Status 404 returned error can't find the container with id 048799eadc167850f8abda955b4696011fd57c35924d51148afb877cc652c6d4
Feb 18 14:31:34 crc kubenswrapper[4957]: W0218 14:31:34.763904 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-293ef2c54e86da798601c58fce137093d09a9a2d25528cc5e0c96ddcb542bf2a WatchSource:0}: Error finding container 293ef2c54e86da798601c58fce137093d09a9a2d25528cc5e0c96ddcb542bf2a: Status 404 returned error can't find the container with id 293ef2c54e86da798601c58fce137093d09a9a2d25528cc5e0c96ddcb542bf2a
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.981592 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.983374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.983436 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.983446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:34 crc kubenswrapper[4957]: I0218 14:31:34.983472 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 14:31:34 crc kubenswrapper[4957]: E0218 14:31:34.983919 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc"
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.125658 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.138786 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:58:58.256483415 +0000 UTC
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.215267 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"487b111bb0e3c7eb282963b429483ac9292caae375c51f61268b1a2302da7ec9"}
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.216478 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ac095f914be2bbc3cc1ae5f0e3d1a1bb571f5316187ff22efe5c1a57465118b0"}
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.217530 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"293ef2c54e86da798601c58fce137093d09a9a2d25528cc5e0c96ddcb542bf2a"}
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.218230 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"048799eadc167850f8abda955b4696011fd57c35924d51148afb877cc652c6d4"}
Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.219191 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb3b541e1a1bd6a491e85d14a47b8b53f45822e01ae36c6b40f38bcf1e10b36c"}
Feb 18 14:31:35 crc kubenswrapper[4957]: E0218 14:31:35.545189 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="1.6s"
Feb 18 14:31:35 crc kubenswrapper[4957]: W0218 14:31:35.559638 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 18 14:31:35 crc kubenswrapper[4957]: E0218 14:31:35.559702 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 18 14:31:35 crc kubenswrapper[4957]: W0218 14:31:35.627611 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
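The "SyncLoop (PLEG)" records above are the pod lifecycle event generator turning observed runtime state changes into ContainerStarted and ContainerDied events. A sketch of the relist-style diff behind them, under illustrative types; the real generator lives in Kubernetes under pkg/kubelet/pleg, and everything named here is a stand-in.

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// diff emits one event per container whose state changed between two
// consecutive runtime polls (the "relist").
func diff(pod string, prev, cur map[string]state) {
	for id, s := range cur {
		switch {
		case prev[id] != running && s == running:
			fmt.Printf("event for pod %q: ContainerStarted %s\n", pod, id)
		case prev[id] == running && s == exited:
			fmt.Printf("event for pod %q: ContainerDied %s\n", pod, id)
		}
	}
}

func main() {
	prev := map[string]state{"487b111bb0e3": running}
	cur := map[string]state{"487b111bb0e3": exited}
	diff("openshift-kube-scheduler/openshift-kube-scheduler-crc", prev, cur)
}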
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:35 crc kubenswrapper[4957]: W0218 14:31:35.632795 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:35 crc kubenswrapper[4957]: E0218 14:31:35.632875 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.784856 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.786450 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.786525 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.786547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:35 crc kubenswrapper[4957]: I0218 14:31:35.786633 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 14:31:35 crc kubenswrapper[4957]: E0218 14:31:35.787360 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 18 14:31:35 crc kubenswrapper[4957]: W0218 14:31:35.797501 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:35 crc kubenswrapper[4957]: E0218 14:31:35.797598 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.125265 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.129395 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 14:31:36 crc kubenswrapper[4957]: E0218 14:31:36.130408 4957 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.139327 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:50:15.393205097 +0000 UTC Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.223769 4957 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a" exitCode=0 Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.223837 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.223897 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.224938 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.224977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.224991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.225451 4957 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5e16888e8d272463cc6d1de95580644996c57cc899bf9668d019732a2a5beedf" exitCode=0 Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.225555 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.225544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5e16888e8d272463cc6d1de95580644996c57cc899bf9668d019732a2a5beedf"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.226248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.226286 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.226299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.227637 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.227683 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.227704 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.229237 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711" exitCode=0 Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.229319 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.229325 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.230305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.230352 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.230369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.231543 4957 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b75296d3b521b12783576735982db94b0cf832d8495919aca432af9a25befee3" exitCode=0 Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.231582 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b75296d3b521b12783576735982db94b0cf832d8495919aca432af9a25befee3"} Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.231818 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.232233 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.233217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.233252 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.233263 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.233217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.233324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:36 crc kubenswrapper[4957]: I0218 14:31:36.233355 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.125212 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.140370 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:50:30.219149576 +0000 UTC Feb 18 14:31:37 crc kubenswrapper[4957]: E0218 14:31:37.146097 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="3.2s" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.236215 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.236267 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.237146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.237173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.237184 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.240538 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.240955 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.240998 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.241020 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.241035 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.241047 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.241437 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.241461 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.241474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.243646 4957 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5d7cf833b09ac67b42c14418452f0f60bf2633e53069f202491e87eb98d68e60" exitCode=0 Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.243713 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5d7cf833b09ac67b42c14418452f0f60bf2633e53069f202491e87eb98d68e60"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.243870 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.245328 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.245365 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.245380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.248312 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.248344 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.248356 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.248446 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.249611 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.249643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.249653 4957 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.251106 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"66ef2dae6a762df2e06b45cba045afd41e5b0ad5fb05bbc83bab9e4aa2cb3525"} Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.251252 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.252209 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.252248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.252260 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: W0218 14:31:37.336011 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:37 crc kubenswrapper[4957]: E0218 14:31:37.336090 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.388445 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.389792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.389833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.389846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.389875 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 14:31:37 crc kubenswrapper[4957]: E0218 14:31:37.390397 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 18 14:31:37 crc kubenswrapper[4957]: I0218 14:31:37.422103 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:37 crc kubenswrapper[4957]: W0218 14:31:37.599258 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 18 14:31:37 crc kubenswrapper[4957]: E0218 14:31:37.599356 4957 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.140768 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:13:57.841039611 +0000 UTC Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.140883 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255628 4957 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="90ce77e438c91127a56325b986c1cd12ae2845cc1551efc322fdccacdae7fe0b" exitCode=0 Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255730 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255753 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255795 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255839 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255954 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256057 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255783 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"90ce77e438c91127a56325b986c1cd12ae2845cc1551efc322fdccacdae7fe0b"} Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.255728 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256710 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256815 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.256849 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257794 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257814 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257822 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257832 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257862 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.257861 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.258025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.258044 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:38 crc kubenswrapper[4957]: I0218 14:31:38.909267 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.141231 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:51:25.865822523 +0000 UTC Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.261923 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262328 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33b7f4c29e2794562f1c1c1dfe35751aa23b6e97f31759bac34c0838a5e08574"} Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262369 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7987886eaa36b568665d2668eb140f3c180457da3f2c761d55cebef8acdeb1e1"} Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262385 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"93ae4c0b4df72259006d655accb6407b8b9dd1f7784fdd845a33c41135038fa8"} Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262391 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262415 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262488 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262396 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"045386f3f157b0457cd68c697264da83b8a963fbf7bf858f8d74cb6c56f8de0f"} Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262612 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8bc659d9f5bf25c26af3b736f035a00583ddecb3c175bb67e502a408a99e1bd3"} Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.262930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.263407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.263441 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.263451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.264019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.264051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.264060 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:39 crc kubenswrapper[4957]: I0218 14:31:39.661102 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.142314 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:05:26.347583434 +0000 UTC Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.194539 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.194733 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.196802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.196857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.196874 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.210472 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.264513 4957 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.264565 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.264624 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.265798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.265832 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.265845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.266236 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.266275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.266291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.422464 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.422583 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.470266 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.591398 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.593126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.593177 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.593189 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:40 crc kubenswrapper[4957]: I0218 14:31:40.593217 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 14:31:41 crc kubenswrapper[4957]: I0218 14:31:41.142755 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:29:31.278988667 +0000 UTC Feb 18 14:31:41 crc kubenswrapper[4957]: I0218 14:31:41.267376 4957 kubelet_node_status.go:401] "Setting node annotation 
Feb 18 14:31:41 crc kubenswrapper[4957]: I0218 14:31:41.267376 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:41 crc kubenswrapper[4957]: I0218 14:31:41.268628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:41 crc kubenswrapper[4957]: I0218 14:31:41.268678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:41 crc kubenswrapper[4957]: I0218 14:31:41.268690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.143452 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:09:57.803434695 +0000 UTC
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.269661 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.270712 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.270745 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.270759 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.659131 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.659411 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.661173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.661242 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:42 crc kubenswrapper[4957]: I0218 14:31:42.661261 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:43 crc kubenswrapper[4957]: I0218 14:31:43.107699 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 14:31:43 crc kubenswrapper[4957]: I0218 14:31:43.107906 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:43 crc kubenswrapper[4957]: I0218 14:31:43.109336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:43 crc kubenswrapper[4957]: I0218 14:31:43.109380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:43 crc kubenswrapper[4957]: I0218 14:31:43.109392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:43 crc kubenswrapper[4957]: I0218 14:31:43.144496 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:42:25.947812651 +0000 UTC
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.145095 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:28:43.648903104 +0000 UTC
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.207642 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.208117 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.209692 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.209729 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.209743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.213326 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.275561 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.275658 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.276372 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.276406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:44 crc kubenswrapper[4957]: I0218 14:31:44.276445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:44 crc kubenswrapper[4957]: E0218 14:31:44.281990 4957 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 14:31:45 crc kubenswrapper[4957]: I0218 14:31:45.145542 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:18:59.672554889 +0000 UTC
Feb 18 14:31:45 crc kubenswrapper[4957]: I0218 14:31:45.278121 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 14:31:45 crc kubenswrapper[4957]: I0218 14:31:45.279714 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:31:45 crc kubenswrapper[4957]: I0218 14:31:45.279775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:31:45 crc kubenswrapper[4957]: I0218 14:31:45.279789 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:31:45 crc kubenswrapper[4957]: I0218 14:31:45.283653 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:37:21.371950963 +0000 UTC Feb 18 14:31:46 crc kubenswrapper[4957]: I0218 14:31:46.280879 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:46 crc kubenswrapper[4957]: I0218 14:31:46.282186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:46 crc kubenswrapper[4957]: I0218 14:31:46.282229 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:46 crc kubenswrapper[4957]: I0218 14:31:46.282247 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:47 crc kubenswrapper[4957]: I0218 14:31:47.147398 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:53:44.025713963 +0000 UTC Feb 18 14:31:47 crc kubenswrapper[4957]: W0218 14:31:47.901646 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 14:31:47 crc kubenswrapper[4957]: I0218 14:31:47.901742 4957 trace.go:236] Trace[813904585]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 14:31:37.900) (total time: 10001ms): Feb 18 14:31:47 crc kubenswrapper[4957]: Trace[813904585]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:31:47.901) Feb 18 14:31:47 crc kubenswrapper[4957]: Trace[813904585]: [10.00116258s] [10.00116258s] END Feb 18 14:31:47 crc kubenswrapper[4957]: E0218 14:31:47.901764 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.126377 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.140989 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.141128 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.147985 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2026-01-16 17:27:40.854231067 +0000 UTC Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.288196 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.290885 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd" exitCode=255 Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.290951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd"} Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.291158 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.292251 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.292361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.292486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.293153 4957 scope.go:117] "RemoveContainer" containerID="3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd" Feb 18 14:31:48 crc kubenswrapper[4957]: W0218 14:31:48.360037 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.360202 4957 trace.go:236] Trace[284047161]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 14:31:38.358) (total time: 10001ms): Feb 18 14:31:48 crc kubenswrapper[4957]: Trace[284047161]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:31:48.360) Feb 18 14:31:48 crc kubenswrapper[4957]: Trace[284047161]: [10.001395172s] [10.001395172s] END Feb 18 14:31:48 crc kubenswrapper[4957]: E0218 14:31:48.360250 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.398381 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 14:31:48 crc kubenswrapper[4957]: I0218 14:31:48.398485 4957 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.148569 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:13:40.473972339 +0000 UTC Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.296932 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.299079 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f"} Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.299958 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.300964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.301004 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:49 crc kubenswrapper[4957]: I0218 14:31:49.301015 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.149102 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:36:38.039441237 +0000 UTC Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.422535 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.422634 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.506718 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.506920 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.508036 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.508103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.508120 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:50 crc kubenswrapper[4957]: I0218 14:31:50.525801 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 14:31:51 crc kubenswrapper[4957]: I0218 14:31:51.150177 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:19:10.366192492 +0000 UTC Feb 18 14:31:51 crc kubenswrapper[4957]: I0218 14:31:51.304158 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:51 crc kubenswrapper[4957]: I0218 14:31:51.305725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:51 crc kubenswrapper[4957]: I0218 14:31:51.305779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:51 crc kubenswrapper[4957]: I0218 14:31:51.305817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:51 crc kubenswrapper[4957]: I0218 14:31:51.735590 4957 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.119920 4957 apiserver.go:52] "Watching apiserver" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.127571 4957 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.127879 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.128362 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.128525 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.128525 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.130536 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:52 crc kubenswrapper[4957]: E0218 14:31:52.130642 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:31:52 crc kubenswrapper[4957]: E0218 14:31:52.134654 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.134902 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.134951 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:52 crc kubenswrapper[4957]: E0218 14:31:52.134993 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.136695 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.138474 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.138606 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.138667 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.138700 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.138889 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.139225 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.139306 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.139397 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.140286 4957 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.150610 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 
15:56:28.637374571 +0000 UTC Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.193279 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.210129 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.225676 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.237025 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.251052 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.264025 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.274811 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:52 crc kubenswrapper[4957]: I0218 14:31:52.817740 4957 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.107791 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.123476 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.149279 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.150889 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:13:52.876063407 +0000 UTC Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.158483 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.169221 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.178926 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.186796 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.200093 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.210053 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.211876 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.212087 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.216835 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.317490 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.336613 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.348371 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.358291 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.368058 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.378181 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.389280 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.398656 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.400156 4957 trace.go:236] Trace[1255052334]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 14:31:42.426) (total time: 10973ms): Feb 18 14:31:53 crc kubenswrapper[4957]: Trace[1255052334]: ---"Objects listed" error: 10973ms (14:31:53.400) Feb 18 14:31:53 crc kubenswrapper[4957]: Trace[1255052334]: [10.973270389s] [10.973270389s] END Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.400184 4957 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.402230 4957 kubelet_node_status.go:99] 
"Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.402567 4957 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.402567 4957 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.406432 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.408187 4957 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.503878 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.503945 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.503975 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504005 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504028 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504052 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504080 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504102 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504121 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504145 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504171 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504248 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504270 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504291 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504313 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504396 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504439 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504407 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504467 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504575 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504607 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504638 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504667 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504694 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504720 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504755 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504781 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504805 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504877 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504910 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504936 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504970 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504998 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505027 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505054 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505085 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505115 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505141 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505170 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505196 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505221 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505246 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505271 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505294 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505315 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505336 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505356 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505373 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505393 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505408 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505449 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505506 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505540 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505567 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505605 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505631 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505659 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505711 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505736 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505764 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505799 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505826 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505853 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505918 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505949 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505980 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506009 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506038 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506069 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506407 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506450 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506479 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506609 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506633 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506653 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506681 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504492 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506710 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506742 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506771 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506798 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506830 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506857 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506884 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506912 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506939 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506968 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507040 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507060 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507078 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507097 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507114 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507133 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507178 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507198 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.508812 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.508871 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.508912 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506710 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504492 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504873 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.504937 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505229 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505532 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505790 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.505810 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506090 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506128 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506140 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506166 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506452 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506612 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506685 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506722 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.506952 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507021 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507047 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507086 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507116 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.507139 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.508706 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.508798 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.508961 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.509244 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.509333 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.509373 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.509826 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.510088 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.510077 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.510121 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.510143 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.510144 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.510396 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.511220 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.511875 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.511943 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.512202 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.512935 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.513059 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515393 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515499 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515726 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515811 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515850 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515890 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515929 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515962 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515996 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.515999 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.516032 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.516179 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.516326 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.516344 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517035 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517098 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517144 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517186 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517220 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517256 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517290 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517322 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517374 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517402 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.518772 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.518906 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519008 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519097 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519246 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519355 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519452 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519527 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519618 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519706 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519874 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519956 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520038 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520109 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520177 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520255 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520328 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520395 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520481 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520564 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520634 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520703 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520781 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520851 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520920 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521121 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521199 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521276 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521350 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521480 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517142 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.517192 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524111 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.518742 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519248 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519442 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519467 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519486 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519561 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519636 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.519884 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520216 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.520470 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521007 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.521585 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:31:54.021555763 +0000 UTC m=+20.542420717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524308 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524348 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524378 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524401 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524436 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524460 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524481 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524500 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524519 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524542 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524579 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524641 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524661 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524680 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524700 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524721 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 
14:31:53.524743 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524762 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524784 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524803 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524821 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524854 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524874 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524895 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524916 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524936 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524955 4957 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524971 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524990 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525009 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525028 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525045 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525062 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525080 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525101 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525120 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 14:31:53 crc 
kubenswrapper[4957]: I0218 14:31:53.525140 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525157 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525176 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525197 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525216 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525234 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525255 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525273 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525323 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525347 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") 
" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525384 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525433 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525457 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525483 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525504 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525530 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525551 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525574 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525600 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525618 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525641 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525662 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525694 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525738 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521633 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521765 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521988 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.522364 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.522771 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.522916 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523040 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523136 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523222 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523529 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523597 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523636 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.523923 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524005 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.524057 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.521597 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.526649 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.526721 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.526725 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.526817 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.526948 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527066 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527103 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527123 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527262 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527453 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527502 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527593 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527617 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.527917 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.528053 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.528372 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.528531 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.528208 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529394 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529499 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529537 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529545 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529707 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529850 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529825 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.529977 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530101 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530133 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530109 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530323 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530539 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530655 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530810 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530884 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530858 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.530945 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.525751 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531330 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531339 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531360 4957 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531386 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531413 4957 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531645 4957 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531664 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531681 4957 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531703 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531723 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531746 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531806 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531886 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531908 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531930 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531947 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531961 4957 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531979 4957 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.531994 4957 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532012 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532027 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532042 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532058 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532072 4957 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532087 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532101 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532119 4957 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532135 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532151 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532166 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532181 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532194 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532238 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532253 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532269 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532284 4957 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532297 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532312 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532326 4957 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532340 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532360 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532375 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532390 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532404 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532434 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532449 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532465 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532480 4957 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532496 4957 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532510 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532526 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532539 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532554 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532569 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532583 4957 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532597 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.532612 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.533208 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.533275 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:54.033253985 +0000 UTC m=+20.554118969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.534133 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.534232 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.534820 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:54.03480173 +0000 UTC m=+20.555666714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.535205 4957 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.535590 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536197 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536257 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536261 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536336 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536349 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536608 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.536995 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.537067 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.537708 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.537733 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.538340 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.538403 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.539306 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.540410 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.540785 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.540866 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.541124 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.541255 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.541535 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.542560 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.547368 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.547908 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.548151 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.548160 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.549245 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.549636 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.549733 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.549829 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.550760 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:54.050615933 +0000 UTC m=+20.571480767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.549875 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.552272 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.552301 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.552359 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:53 crc kubenswrapper[4957]: E0218 14:31:53.552451 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:54.052436256 +0000 UTC m=+20.573301220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.552838 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.553794 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.553272 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.553467 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.555035 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.555701 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.556334 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.556242 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.556808 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.558301 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.559348 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.559617 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.559816 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.559902 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.560034 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.560297 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.560388 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.560803 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.560979 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.561259 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.561309 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.561392 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.561964 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.562197 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.562374 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.562474 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.562684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.562830 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.565915 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.566050 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.566249 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.570374 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.570390 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.570887 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.570919 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.571881 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.571902 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.571927 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.572050 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.572138 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.572891 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.573392 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.574325 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.573975 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.574795 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.574894 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.575236 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.575732 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.575871 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.575980 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.576578 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.578217 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.584585 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.585379 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.591384 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.600996 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.633832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.633885 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.633981 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634000 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634018 4957 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634034 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634049 4957 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634062 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634072 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634083 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634100 4957 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634113 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634126 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634139 4957 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634136 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634150 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634195 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634207 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634218 4957 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634227 4957 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634238 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634249 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634260 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634270 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634263 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634280 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634367 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634379 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634391 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634403 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634436 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634447 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634460 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634472 4957 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634483 4957 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634492 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634501 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634511 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634521 4957 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634531 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634543 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634576 4957 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634587 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634596 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634606 4957 reconciler_common.go:293] "Volume detached for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634616 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634626 4957 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634635 4957 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634645 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634656 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634666 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634675 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634687 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634697 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634706 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634715 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634725 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634736 4957 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634744 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634754 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634765 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634774 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634784 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634793 4957 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634802 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634813 4957 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634822 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634833 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634843 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634853 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634862 4957 reconciler_common.go:293] "Volume detached for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634870 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634879 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634888 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634899 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634908 4957 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634918 4957 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634927 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634937 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634946 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634955 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634963 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634973 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634983 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath 
\"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.634995 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635004 4957 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635013 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635024 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635033 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635042 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635051 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635060 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635070 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635080 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635089 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635098 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635106 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635117 4957 
reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635127 4957 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635136 4957 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635145 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635155 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635164 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635174 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635183 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635192 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635202 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635213 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635222 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635232 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635241 4957 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635252 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635262 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635272 4957 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635281 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635291 4957 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635300 4957 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635309 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635318 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635327 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635338 4957 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635347 4957 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635357 4957 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635366 4957 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635375 4957 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635385 4957 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635394 4957 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635405 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635434 4957 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635443 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635452 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635462 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635472 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635481 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635490 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635500 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635509 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635519 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635535 4957 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635544 4957 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635553 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.635563 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.651238 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.657806 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 14:31:53 crc kubenswrapper[4957]: I0218 14:31:53.665275 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 14:31:53 crc kubenswrapper[4957]: W0218 14:31:53.683715 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-32f16862171c978a3fc96fe449d2ed2f0604d49e0a092ebe98bf440bfc0febbf WatchSource:0}: Error finding container 32f16862171c978a3fc96fe449d2ed2f0604d49e0a092ebe98bf440bfc0febbf: Status 404 returned error can't find the container with id 32f16862171c978a3fc96fe449d2ed2f0604d49e0a092ebe98bf440bfc0febbf Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.038640 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.038784 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:31:55.038764752 +0000 UTC m=+21.559629496 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.039146 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.039183 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.039259 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.039304 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:55.039293097 +0000 UTC m=+21.560157841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.039599 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.039692 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:55.039682189 +0000 UTC m=+21.560546923 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.140160 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140454 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140507 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140522 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140598 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:55.140573928 +0000 UTC m=+21.661438672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.140477 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140848 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140921 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.140980 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.141098 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:55.141081603 +0000 UTC m=+21.661946337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.151823 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:28:44.402291848 +0000 UTC Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.212351 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.212540 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.212360 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:54 crc kubenswrapper[4957]: E0218 14:31:54.212982 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.216260 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.217132 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.217945 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.218691 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.219444 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.220034 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.220747 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.221375 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.222198 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.222826 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.223490 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.224226 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.227384 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.228109 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.229197 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.229814 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.230498 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.232448 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.233126 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.234373 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.234986 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.235700 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.236721 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.237504 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.238526 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.239312 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.240677 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.241231 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.243008 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.243617 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.244190 4957 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.244305 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.247248 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.247509 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.248309 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.249745 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.251569 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.252361 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.253545 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.255740 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.256556 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.258091 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.258883 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 
14:31:54.260071 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.260815 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.261922 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.262674 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.262976 4957 csr.go:261] certificate signing request csr-rkdfl is approved, waiting to be issued Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.263865 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.264765 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.265919 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.266580 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.267178 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.268311 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.269043 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.270309 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.279304 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.282576 4957 csr.go:257] certificate signing request csr-rkdfl is issued Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.305281 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.319227 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"32f16862171c978a3fc96fe449d2ed2f0604d49e0a092ebe98bf440bfc0febbf"} Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.321502 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361"} Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.321569 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391"} Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.321585 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8bd07f1c55853a8068b26adfe146cff9e9e1d005635dbdd4c6dc75a15fa1c471"} Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.324615 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc"} Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.324772 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8403e3bf634ee75afec46ee85f21698b553848912e4450849bf9c91734876b56"} Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.345278 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.389021 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.417162 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.431968 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.443495 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.454314 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.466985 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.484138 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.496485 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.509684 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.520996 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.725177 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-x8wwg"] Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.725600 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sk96m"] Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.725825 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.726167 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.726192 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wn2pd"] Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.726678 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.729556 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.733489 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.735904 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.736307 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.736609 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.738103 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.739331 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.739551 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.739647 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.740117 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.740883 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.747634 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.754398 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.760097 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.773255 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.786027 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.801070 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.814894 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.833041 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845440 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cde17e3-43e9-4bed-afe8-5b76229e35cf-proxy-tls\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845503 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-netns\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845525 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-cni-multus\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845548 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7rj\" (UniqueName: \"kubernetes.io/projected/2b6a720f-7d42-48e9-8073-fb4f7417e6cb-kube-api-access-qv7rj\") pod \"node-resolver-wn2pd\" (UID: \"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\") " pod="openshift-dns/node-resolver-wn2pd"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845568 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cde17e3-43e9-4bed-afe8-5b76229e35cf-mcd-auth-proxy-config\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845600 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-os-release\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845636 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-daemon-config\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845753 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4cde17e3-43e9-4bed-afe8-5b76229e35cf-rootfs\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845808 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-multus-certs\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845896 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzslj\" (UniqueName: \"kubernetes.io/projected/4cde17e3-43e9-4bed-afe8-5b76229e35cf-kube-api-access-mzslj\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845924 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-cni-dir\") pod 
\"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845949 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-cni-binary-copy\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.845988 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-etc-kubernetes\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846021 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-system-cni-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-socket-dir-parent\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-k8s-cni-cncf-io\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846111 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-cni-bin\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-kubelet\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846202 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-hostroot\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-conf-dir\") pod \"multus-sk96m\" (UID: 
\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vssz4\" (UniqueName: \"kubernetes.io/projected/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-kube-api-access-vssz4\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846272 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-cnibin\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.846307 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2b6a720f-7d42-48e9-8073-fb4f7417e6cb-hosts-file\") pod \"node-resolver-wn2pd\" (UID: \"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\") " pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.853840 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.873154 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.889602 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.904317 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.917735 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.933065 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.946907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-system-cni-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.946943 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-socket-dir-parent\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.946956 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-k8s-cni-cncf-io\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.946971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-etc-kubernetes\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.946986 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-cni-bin\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947015 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-kubelet\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947031 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-hostroot\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947047 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-conf-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947059 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vssz4\" (UniqueName: \"kubernetes.io/projected/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-kube-api-access-vssz4\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947073 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-cnibin\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947087 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2b6a720f-7d42-48e9-8073-fb4f7417e6cb-hosts-file\") pod \"node-resolver-wn2pd\" (UID: 
\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\") " pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947087 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-k8s-cni-cncf-io\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947120 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-etc-kubernetes\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947096 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-socket-dir-parent\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947110 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cde17e3-43e9-4bed-afe8-5b76229e35cf-proxy-tls\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947181 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-netns\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947199 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-cni-multus\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947214 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7rj\" (UniqueName: \"kubernetes.io/projected/2b6a720f-7d42-48e9-8073-fb4f7417e6cb-kube-api-access-qv7rj\") pod \"node-resolver-wn2pd\" (UID: \"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\") " pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947217 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-netns\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947092 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-cni-bin\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947142 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-conf-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947240 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2b6a720f-7d42-48e9-8073-fb4f7417e6cb-hosts-file\") pod \"node-resolver-wn2pd\" (UID: \"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\") " pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947246 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-cni-multus\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-var-lib-kubelet\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947264 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-hostroot\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947283 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-cnibin\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947395 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cde17e3-43e9-4bed-afe8-5b76229e35cf-mcd-auth-proxy-config\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947514 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-os-release\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947537 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-daemon-config\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947553 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4cde17e3-43e9-4bed-afe8-5b76229e35cf-rootfs\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " 
pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947567 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-multus-certs\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947610 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-host-run-multus-certs\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947859 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-os-release\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947885 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4cde17e3-43e9-4bed-afe8-5b76229e35cf-rootfs\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947948 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzslj\" (UniqueName: \"kubernetes.io/projected/4cde17e3-43e9-4bed-afe8-5b76229e35cf-kube-api-access-mzslj\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947979 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-cni-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.947995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-cni-binary-copy\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.948273 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-daemon-config\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.948293 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cde17e3-43e9-4bed-afe8-5b76229e35cf-mcd-auth-proxy-config\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.948353 
4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-multus-cni-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.948518 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-cni-binary-copy\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.948520 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-system-cni-dir\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.950395 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.968979 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.995304 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.995832 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cde17e3-43e9-4bed-afe8-5b76229e35cf-proxy-tls\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.995914 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzslj\" (UniqueName: \"kubernetes.io/projected/4cde17e3-43e9-4bed-afe8-5b76229e35cf-kube-api-access-mzslj\") pod \"machine-config-daemon-x8wwg\" (UID: \"4cde17e3-43e9-4bed-afe8-5b76229e35cf\") " pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 
14:31:54.995920 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7rj\" (UniqueName: \"kubernetes.io/projected/2b6a720f-7d42-48e9-8073-fb4f7417e6cb-kube-api-access-qv7rj\") pod \"node-resolver-wn2pd\" (UID: \"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\") " pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:54 crc kubenswrapper[4957]: I0218 14:31:54.996082 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vssz4\" (UniqueName: \"kubernetes.io/projected/e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb-kube-api-access-vssz4\") pod \"multus-sk96m\" (UID: \"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\") " pod="openshift-multus/multus-sk96m" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.015347 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.028448 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.039187 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sk96m" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.046152 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.048362 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.048490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.048526 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.048635 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.048719 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:57.048701304 +0000 UTC m=+23.569566048 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.049120 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:31:57.049107726 +0000 UTC m=+23.569972470 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.049204 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.049243 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:57.04923384 +0000 UTC m=+23.570098584 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.053963 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.054254 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wn2pd" Feb 18 14:31:55 crc kubenswrapper[4957]: W0218 14:31:55.058476 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cde17e3_43e9_4bed_afe8_5b76229e35cf.slice/crio-8992ef7608828c107f45c05d494cf5107ccf70fcae7f725a8097b499f9ecb08d WatchSource:0}: Error finding container 8992ef7608828c107f45c05d494cf5107ccf70fcae7f725a8097b499f9ecb08d: Status 404 returned error can't find the container with id 8992ef7608828c107f45c05d494cf5107ccf70fcae7f725a8097b499f9ecb08d Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.074228 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.127621 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-s7f5j"] Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.128846 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.130549 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.130556 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.142445 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.148901 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.148960 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149088 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149101 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149129 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149142 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149183 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:57.149168891 +0000 UTC m=+23.670033625 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149111 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149374 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.149397 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:31:57.149390337 +0000 UTC m=+23.670255081 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.153932 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:59:46.070519122 +0000 UTC Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.159078 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.173922 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.186602 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.201303 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.213288 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:55 crc kubenswrapper[4957]: E0218 14:31:55.213447 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.219043 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.234124 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.247804 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.253976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-system-cni-dir\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.254027 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.254049 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cni-binary-copy\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.254081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cnibin\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.254097 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-os-release\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.254112 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5sf7\" (UniqueName: \"kubernetes.io/projected/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-kube-api-access-w5sf7\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.254134 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.263667 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.275762 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.283939 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 14:26:54 +0000 UTC, rotation deadline is 2026-12-13 14:33:42.849531147 +0000 UTC Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.283997 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7152h1m47.565538616s for next certificate rotation Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.295116 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366315 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-system-cni-dir\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366411 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366460 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cni-binary-copy\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366501 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cnibin\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366520 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-os-release\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366525 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-system-cni-dir\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366552 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5sf7\" (UniqueName: \"kubernetes.io/projected/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-kube-api-access-w5sf7\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366611 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cnibin\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.366861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-os-release\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.367165 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.367225 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.367250 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-cni-binary-copy\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.371961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerStarted","Data":"644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673"} Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.372010 4957 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerStarted","Data":"c74996e3a076c8a3d12c90d330b5455ab4db88454aefa814d28589d4f2639e22"} Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.377976 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wn2pd" event={"ID":"2b6a720f-7d42-48e9-8073-fb4f7417e6cb","Type":"ContainerStarted","Data":"daa9eeb0acfaf2c9adc94be0182b729da9dfe1a10325efcdb825535fac69dde3"} Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.380527 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e"} Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.380584 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e"} Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.380598 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"8992ef7608828c107f45c05d494cf5107ccf70fcae7f725a8097b499f9ecb08d"} Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.391133 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5sf7\" (UniqueName: \"kubernetes.io/projected/77ae51a3-a3f7-4ea3-afb9-93558bf3b821-kube-api-access-w5sf7\") pod \"multus-additional-cni-plugins-s7f5j\" (UID: \"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\") " pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.392992 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.409745 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.422154 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.433565 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.447610 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.452383 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.460661 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: W0218 14:31:55.466857 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77ae51a3_a3f7_4ea3_afb9_93558bf3b821.slice/crio-be17874ce3c3f36c1aa8e34e262c8691d982f090138028e70f93b693e0774575 WatchSource:0}: Error finding container be17874ce3c3f36c1aa8e34e262c8691d982f090138028e70f93b693e0774575: Status 404 returned error can't find the container with id be17874ce3c3f36c1aa8e34e262c8691d982f090138028e70f93b693e0774575 Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.476515 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.491787 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.497107 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7lp9"] Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.501081 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.504185 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.504338 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.504624 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.504664 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.504753 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.504880 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.505052 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.516746 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.531681 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.546457 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.566365 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569019 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovn-node-metrics-cert\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569055 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-systemd-units\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569072 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-node-log\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: 
I0218 14:31:55.569096 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-netd\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569117 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-systemd\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569130 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569143 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-script-lib\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569157 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-log-socket\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569201 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-slash\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569233 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ngss\" (UniqueName: \"kubernetes.io/projected/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-kube-api-access-6ngss\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-bin\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569275 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-config\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569321 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569366 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-var-lib-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569388 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-kubelet\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569408 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-netns\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569441 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-ovn\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569471 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-env-overrides\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569491 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-etc-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.569510 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.593679 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.606284 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.624079 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.640067 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.660431 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670523 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 
14:31:55.670573 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-var-lib-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670595 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-kubelet\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670614 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-netns\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670636 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-ovn\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-kubelet\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670664 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-env-overrides\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670594 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-ovn\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670691 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-etc-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670734 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-etc-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670678 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-netns\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670757 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670782 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670794 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovn-node-metrics-cert\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-var-lib-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-systemd-units\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670814 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-systemd-units\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-node-log\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670957 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-netd\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-node-log\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.670982 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-systemd\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671009 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-systemd\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671051 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671076 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-script-lib\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671095 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-log-socket\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671118 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-slash\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671076 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-netd\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ngss\" (UniqueName: \"kubernetes.io/projected/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-kube-api-access-6ngss\") pod \"ovnkube-node-t7lp9\" (UID: 
\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671159 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-bin\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671168 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-log-socket\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671178 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-config\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-slash\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671249 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-openvswitch\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671333 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-bin\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671551 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-env-overrides\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671816 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-script-lib\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.671836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-config\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.675925 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovn-node-metrics-cert\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.690092 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.696065 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ngss\" (UniqueName: \"kubernetes.io/projected/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-kube-api-access-6ngss\") pod \"ovnkube-node-t7lp9\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.728437 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.741382 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.758590 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.775404 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.791577 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:55Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:55 crc kubenswrapper[4957]: I0218 14:31:55.820688 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9"
Feb 18 14:31:55 crc kubenswrapper[4957]: W0218 14:31:55.831344 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1ab5e7d_28c9_416b_9e12_1209987d8a2c.slice/crio-024ebb6ef55970f726161a8a19661c20d8e7565a53612ff6d85907f667f95251 WatchSource:0}: Error finding container 024ebb6ef55970f726161a8a19661c20d8e7565a53612ff6d85907f667f95251: Status 404 returned error can't find the container with id 024ebb6ef55970f726161a8a19661c20d8e7565a53612ff6d85907f667f95251
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.154789 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:24:32.618569409 +0000 UTC
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.212314 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.212410 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:31:56 crc kubenswrapper[4957]: E0218 14:31:56.212489 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:31:56 crc kubenswrapper[4957]: E0218 14:31:56.212543 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.386062 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wn2pd" event={"ID":"2b6a720f-7d42-48e9-8073-fb4f7417e6cb","Type":"ContainerStarted","Data":"d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca"}
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.387172 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977"}
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.389093 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79" exitCode=0
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.389171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79"}
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.389191 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"024ebb6ef55970f726161a8a19661c20d8e7565a53612ff6d85907f667f95251"}
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.390927 4957 generic.go:334] "Generic (PLEG): container finished" podID="77ae51a3-a3f7-4ea3-afb9-93558bf3b821" containerID="a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27" exitCode=0
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.390946 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerDied","Data":"a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27"}
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.390959 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerStarted","Data":"be17874ce3c3f36c1aa8e34e262c8691d982f090138028e70f93b693e0774575"}
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.401397 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.412894 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.429568 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.443009 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.460168 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.491143 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z"
Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.503319 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.522314 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.542097 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.558920 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.571550 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.585179 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.603694 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.615481 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.629154 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc 
kubenswrapper[4957]: I0218 14:31:56.642733 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.655838 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.668525 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.680582 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.691890 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.713122 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.724880 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.738077 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:56 crc kubenswrapper[4957]: I0218 14:31:56.749455 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:56Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.087104 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.087302 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:32:01.087274284 +0000 UTC m=+27.608139018 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.087773 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.087842 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.087900 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.087986 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.087996 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:01.087981174 +0000 UTC m=+27.608845918 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.088052 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:01.088038486 +0000 UTC m=+27.608903230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.155300 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:19:56.748906705 +0000 UTC Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.189110 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.189166 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189283 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189299 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189310 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189360 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:01.189344887 +0000 UTC m=+27.710209631 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189359 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189402 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189433 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.189500 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:01.189483622 +0000 UTC m=+27.710348366 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.212299 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:57 crc kubenswrapper[4957]: E0218 14:31:57.212479 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.397125 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702"} Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.397190 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53"} Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.397204 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49"} Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.397216 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84"} Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.400310 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerStarted","Data":"1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27"} Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.413888 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.429733 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.431193 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.437115 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.440940 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.443204 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.458320 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.471751 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.490679 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.509730 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.522432 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.535223 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.546477 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.558132 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.579873 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.595493 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.609019 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.621733 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.632191 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.641300 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.655377 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.668392 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.680474 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.704205 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.716077 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.731062 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.747390 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.770638 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.791857 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-c5hxm"] Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.792344 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.794511 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.794534 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.795831 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.796053 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.822300 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.837960 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.854767 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.867899 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.881153 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.895238 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-host\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.895323 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-serviceca\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.895357 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkhv\" (UniqueName: \"kubernetes.io/projected/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-kube-api-access-mzkhv\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.900081 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z 
is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.912315 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.923794 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.935860 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.948575 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.964215 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.982250 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:57Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.995858 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-host\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.995893 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-serviceca\") pod \"node-ca-c5hxm\" (UID: 
\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.995910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkhv\" (UniqueName: \"kubernetes.io/projected/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-kube-api-access-mzkhv\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.996035 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-host\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:57 crc kubenswrapper[4957]: I0218 14:31:57.996933 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-serviceca\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.021877 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: 
I0218 14:31:58.051000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkhv\" (UniqueName: \"kubernetes.io/projected/b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89-kube-api-access-mzkhv\") pod \"node-ca-c5hxm\" (UID: \"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\") " pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.081287 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.108017 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-c5hxm" Feb 18 14:31:58 crc kubenswrapper[4957]: W0218 14:31:58.123576 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b8a049_72c4_4a87_bb05_a5cc4ebe7a89.slice/crio-5ae57e7c9c99ac1fd7295e60a6be6ef9ed4a92e15a60c9f6d0b6df6f7dde36c3 WatchSource:0}: Error finding container 5ae57e7c9c99ac1fd7295e60a6be6ef9ed4a92e15a60c9f6d0b6df6f7dde36c3: Status 404 returned error can't find the container with id 5ae57e7c9c99ac1fd7295e60a6be6ef9ed4a92e15a60c9f6d0b6df6f7dde36c3 Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.155969 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:51:45.818029086 +0000 UTC Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.212783 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.212795 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:31:58 crc kubenswrapper[4957]: E0218 14:31:58.212924 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:31:58 crc kubenswrapper[4957]: E0218 14:31:58.212953 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.410829 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724"} Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.410876 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0"} Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.412678 4957 generic.go:334] "Generic (PLEG): container finished" podID="77ae51a3-a3f7-4ea3-afb9-93558bf3b821" containerID="1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27" exitCode=0 Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.412739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerDied","Data":"1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27"} Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.414917 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c5hxm" event={"ID":"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89","Type":"ContainerStarted","Data":"8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6"} Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.414972 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c5hxm" event={"ID":"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89","Type":"ContainerStarted","Data":"5ae57e7c9c99ac1fd7295e60a6be6ef9ed4a92e15a60c9f6d0b6df6f7dde36c3"} Feb 18 14:31:58 crc kubenswrapper[4957]: E0218 14:31:58.421098 4957 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.436057 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"n
ame\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.456478 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.472057 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.485212 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.501203 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.513748 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.530909 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.555185 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.567728 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.589328 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.605373 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.620790 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.634903 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.663621 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.707758 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z 
is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.742245 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.781590 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.821295 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.861556 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.905412 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.940910 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:58 crc kubenswrapper[4957]: I0218 14:31:58.982035 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.023580 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.061498 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.100976 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.143797 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.156260 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:42:46.705133557 +0000 UTC Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.181863 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.212845 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:31:59 crc kubenswrapper[4957]: E0218 14:31:59.213002 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.226803 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.420380 4957 generic.go:334] "Generic (PLEG): container 
finished" podID="77ae51a3-a3f7-4ea3-afb9-93558bf3b821" containerID="01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6" exitCode=0 Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.420452 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerDied","Data":"01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6"} Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.437523 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.451282 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.464573 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.477088 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 
2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.489341 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.501024 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.513932 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.544016 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.584254 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.629470 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.663617 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.702399 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.742561 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.780474 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.802568 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.804377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.804411 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.804423 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.804568 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.814706 4957 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.814941 4957 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.815913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.815939 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.815947 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.815963 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.815973 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:31:59Z","lastTransitionTime":"2026-02-18T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:31:59 crc kubenswrapper[4957]: E0218 14:31:59.827829 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 
2025-08-24T17:21:41Z" [... the "Error updating node status, will retry" entry above repeats verbatim four more times, at 14:31:59.843013, 14:31:59.859782, 14:31:59.880246, and 14:31:59.895497, each preceded by the same NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady "Recording event message for node" lines and the same "Node became not ready" condition, and each rejected with the identical expired-certificate webhook error; the patch payload of the final attempt is truncated at the end of this excerpt ...]
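
Every failure in this excerpt has the same root cause: the serving certificate presented by the network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node's clock reads 2026-02-18. A minimal Go sketch of the kind of on-node check that confirms this is below; the endpoint address is taken from the log, and the final comparison is the same one crypto/x509 performs when it emits the "certificate has expired or is not yet valid" error seen above.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Dial the webhook endpoint the kubelet was failing to POST to,
    	// skipping chain verification purely so we can inspect the cert.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // diagnostic only: fetch the cert, not a verified session
    	})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:   %s\n", cert.Subject)
    	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
    	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))

    	// The same check x509 verification applies; on failure Go reports
    	// the "current time ... is after ..." detail quoted in the log.
    	if now := time.Now(); now.After(cert.NotAfter) {
    		fmt.Printf("expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	}
    }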
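
The payloads being rejected are strategic merge patches against the node's status subresource; the "$setElementOrder/conditions" key in each one is a strategic-merge directive that pins the order of the conditions list after merging. Below is a stripped-down sketch of the same call shape using client-go, assuming a reachable kubeconfig (the path is a placeholder) and reusing the node name "crc" from the log; it illustrates the API, it is not the kubelet's own code.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; any client with node status
    	// patch permission would do.
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// A tiny strategic merge patch shaped like the (much larger)
    	// payloads in the log: the $setElementOrder directive plus the
    	// merged Ready condition.
    	patch := []byte(`{"status":{"$setElementOrder/conditions":[{"type":"Ready"}],` +
    		`"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

    	// PATCH .../nodes/crc/status -- the request that the
    	// "node.network-node-identity.openshift.io" webhook was intercepting.
    	_, err = client.CoreV1().Nodes().Patch(context.TODO(), "crc",
    		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
    	if err != nil {
    		fmt.Println("patch failed:", err)
    	}
    }

Because a validating webhook sits in front of this PATCH, a broken webhook surfaces as "Internal error occurred: failed calling webhook" on every status write, which is exactly the shape of the errors above.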
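
The excerpt shows exactly five consecutive attempts (14:31:59.827829 through .895497), which lines up with the kubelet's nodeStatusUpdateRetry constant (5 in upstream kubelet_node_status.go at the time of writing): updateNodeStatus retries tryUpdateNodeStatus up to that many times before giving up until the next sync. A toy sketch of that retry shape, with the error hard-coded to the webhook failure from the log; it is a simplification, not the kubelet's actual implementation.

    package main

    import (
    	"errors"
    	"fmt"
    )

    // Mirrors the upstream kubelet constant of the same name: five
    // attempts per status-update cycle.
    const nodeStatusUpdateRetry = 5

    // tryUpdateNodeStatus stands in for one PATCH attempt. Here it always
    // fails with the webhook error from the log, so all five attempts are
    // consumed -- which is why the excerpt shows five identical entries.
    func tryUpdateNodeStatus(attempt int) error {
    	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": ` +
    		`x509: certificate has expired or is not yet valid`)
    }

    func main() {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := tryUpdateNodeStatus(i); err != nil {
    			fmt.Printf("E attempt %d: error updating node status, will retry: %v\n", i+1, err)
    			continue
    		}
    		return
    	}
    	fmt.Println("E unable to update node status after", nodeStatusUpdateRetry, "attempts")
    }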
2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.883652 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.883780 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.883872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.883961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.884032 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:31:59Z","lastTransitionTime":"2026-02-18T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:31:59 crc kubenswrapper[4957]: E0218 14:31:59.895497 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:31:59Z is after 
2025-08-24T17:21:41Z" Feb 18 14:31:59 crc kubenswrapper[4957]: E0218 14:31:59.895770 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.897297 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.897324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.897335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.897349 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:31:59 crc kubenswrapper[4957]: I0218 14:31:59.897360 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:31:59Z","lastTransitionTime":"2026-02-18T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.000246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.000293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.000311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.000333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.000350 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.103108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.103177 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.103203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.103234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.103256 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.157353 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:39:57.610649384 +0000 UTC Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.206716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.206772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.206791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.206817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.206838 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.212545 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.212629 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:00 crc kubenswrapper[4957]: E0218 14:32:00.212764 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:00 crc kubenswrapper[4957]: E0218 14:32:00.212937 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.309519 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.309555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.309564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.309581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.309601 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.412905 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.412961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.412975 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.412996 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.413011 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.431886 4957 generic.go:334] "Generic (PLEG): container finished" podID="77ae51a3-a3f7-4ea3-afb9-93558bf3b821" containerID="1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004" exitCode=0 Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.431963 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerDied","Data":"1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.439221 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.447166 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.460804 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.475778 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.487295 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.515972 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.516016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.516024 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.516038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.516051 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.534147 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.544741 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.558173 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.570919 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.583831 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.593926 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.607124 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.618176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.618220 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.618230 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.618250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.618260 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.619089 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.631233 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.652675 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:00Z 
is after 2025-08-24T17:21:41Z"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.720801 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.720836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.720844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.720858 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.720868 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.823345 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.823392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.823405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.823449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.823485 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.927182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.927254 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.927278 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.927308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:00 crc kubenswrapper[4957]: I0218 14:32:00.927329 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:00Z","lastTransitionTime":"2026-02-18T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.035200 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.035270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.035289 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.035315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.035333 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.128967 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.129202 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.129270 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.129389 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:32:09.129343259 +0000 UTC m=+35.650208043 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.129472 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.129565 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.129597 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:09.129561136 +0000 UTC m=+35.650426060 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.129655 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:09.129630008 +0000 UTC m=+35.650494792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.137757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.137807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.137819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.137835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.137846 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.157757 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:41:54.38285272 +0000 UTC
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.212156 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.212291 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.230351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.230434 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230579 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230598 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230603 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230650 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230663 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230719 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:09.230702012 +0000 UTC m=+35.751566756 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230612 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 14:32:01 crc kubenswrapper[4957]: E0218 14:32:01.230791 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:09.230774554 +0000 UTC m=+35.751639458 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.240661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.240693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.240704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.240718 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.240729 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.343450 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.343499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.343511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.343527 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.343538 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.446291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.446363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.446382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.446406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.446445 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.448342 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerStarted","Data":"def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2"} Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.466342 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.478539 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.493385 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.506102 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.520100 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.535359 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.549519 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.549573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.549587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.549608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.549623 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.549955 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:
31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.560598 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.576423 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.588593 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.598627 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.607235 4957 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.617520 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.637063 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:01Z 
is after 2025-08-24T17:21:41Z"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.652489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.652530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.652542 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.652564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.652576 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.755199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.755232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.755241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.755258 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.755269 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.859572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.859857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.859867 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.859883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.859893 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
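Every "Node became not ready" condition recorded above names the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. As a rough illustration only — this is not kubelet source — here is a minimal Go sketch of that kind of directory probe; the path comes straight from the log message, and the extension list follows the usual libcni convention:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether confDir holds at least one CNI config
// file. Hypothetical helper, not kubelet's actual implementation; the
// accepted extensions (.conf, .conflist, .json) follow libcni convention.
func cniConfigPresent(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory taken from the log message above.
	if ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d"); err != nil || !ok {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
		return
	}
	fmt.Println("NetworkReady=true")
}

Until the network plugin (the multus and ovn-kubernetes pods still initializing above) writes a config file into that directory, a check of this shape keeps failing and the node's Ready condition stays False, which is exactly the loop the log shows.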
Has your network provider started?"} Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.962330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.962366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.962379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.962396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:01 crc kubenswrapper[4957]: I0218 14:32:01.962408 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:01Z","lastTransitionTime":"2026-02-18T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.065367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.065477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.065498 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.065529 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.065550 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.158743 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:48:15.912885676 +0000 UTC Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.167851 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.167895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.167912 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.167933 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.167946 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.212671 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.212758 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:02 crc kubenswrapper[4957]: E0218 14:32:02.212904 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:02 crc kubenswrapper[4957]: E0218 14:32:02.213043 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.270408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.270504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.270519 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.270544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.270562 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.373081 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.373115 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.373126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.373143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.373154 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.462012 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.462367 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.466080 4957 generic.go:334] "Generic (PLEG): container finished" podID="77ae51a3-a3f7-4ea3-afb9-93558bf3b821" containerID="def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2" exitCode=0 Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.466101 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerDied","Data":"def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.475496 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.475681 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.475697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.475704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.475717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.475726 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.486930 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.496018 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.501463 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.516218 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.530976 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.548755 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.561697 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.573623 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.577568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.577609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.577622 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.577638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.577650 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.587143 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.603016 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.622256 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.640413 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.653156 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.666057 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.679893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.679936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.679946 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.679962 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.679973 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.685873 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.698135 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.711717 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb
4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.728533 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.742176 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.755103 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.769149 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.778522 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.782317 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.782357 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.782369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.782388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.782400 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.791788 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.804530 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.816985 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.832725 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.851724 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.863221 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.885383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.885449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.885470 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.885490 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.885504 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.989182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.989261 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.989279 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.989307 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:02 crc kubenswrapper[4957]: I0218 14:32:02.989329 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:02Z","lastTransitionTime":"2026-02-18T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.092461 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.092546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.092583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.092610 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.092628 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.114483 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.151056 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b
7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.159680 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:24:23.044565952 +0000 UTC Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.182989 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.195005 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.195308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.195448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.195618 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.195736 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.212449 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:03 crc kubenswrapper[4957]: E0218 14:32:03.212609 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.213069 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.229397 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.240875 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.255462 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.268222 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.288933 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def
4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.298825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.298860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.298870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.298883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.298893 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.302115 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.316487 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.333308 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.345655 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.357978 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.371581 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.401313 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.401358 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.401366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.401381 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.401404 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.473641 4957 generic.go:334] "Generic (PLEG): container finished" podID="77ae51a3-a3f7-4ea3-afb9-93558bf3b821" containerID="9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a" exitCode=0 Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.473724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerDied","Data":"9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.473808 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.474306 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.489918 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.500545 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.504489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.504531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.504548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.504570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.504586 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.508139 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.524465 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.539446 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.551030 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.571019 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.585126 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.599721 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.606791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.606846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.606861 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.606893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.606908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.618777 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.632163 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.646497 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.658814 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.675163 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.689539 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.709827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.709880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.709895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.710076 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.710087 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.711069 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.732115 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.751200 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.764187 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.777322 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.791867 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.807567 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.813706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.813780 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.813796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.813822 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.813836 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.822546 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c
019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.832740 4957 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.848921 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.861212 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.872546 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.886077 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.898564 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.916545 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.916577 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.916587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.916602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.916613 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:03Z","lastTransitionTime":"2026-02-18T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:03 crc kubenswrapper[4957]: I0218 14:32:03.973462 4957 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.018835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.018875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.018884 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.018899 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.018908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.121865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.121904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.121916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.121936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.121948 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.160906 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:44:23.546067012 +0000 UTC
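Two certificate clocks are visible at this point in the log: the kubelet's own serving certificate (the certificate_manager.go line just above) is still inside its validity window, while every "Failed to update status for pod" entry fails because the node-identity webhook at https://127.0.0.1:9743 presents a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node clock (2026-02-18). Go's TLS client rejects the handshake on exactly that NotBefore/NotAfter comparison, so the status patches never reach the API server. A minimal sketch of that validity-window check, reading the certificate from the /etc/webhook-cert/ mount named in the webhook container's volumeMounts above (the file name is illustrative, not taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path; the mount directory appears in the webhook
        // container's volumeMounts, the file name is an assumption.
        pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // The same comparison the TLS handshake performs: a certificate is
        // only acceptable between NotBefore and NotAfter.
        now := time.Now()
        switch {
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

The same expiry can be read with openssl x509 -noout -enddate -in <cert file>; until a certificate with a valid window is served (or the node clock is corrected), every webhook-gated status patch in this log will keep failing with the identical x509 error.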
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.213834 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.213976 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:04 crc kubenswrapper[4957]: E0218 14:32:04.214046 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:32:04 crc kubenswrapper[4957]: E0218 14:32:04.214226 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.224666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.224730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.224743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.224765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.224779 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.236989 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.255997 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.279848 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.301173 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.322347 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.328744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.328785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.328798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.328814 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.328827 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.337329 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.353803 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.368757 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.380928 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.400410 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.413237 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.425350 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.431051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.431095 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.431104 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.431121 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.431131 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.437283 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.447675 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.483186 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" event={"ID":"77ae51a3-a3f7-4ea3-afb9-93558bf3b821","Type":"ContainerStarted","Data":"1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.483569 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.502581 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b
7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.514014 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.529136 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.533117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.533175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.533189 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.533208 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.533220 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.541602 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.554847 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.569784 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.583012 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.623520 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.635334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.635366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.635374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.635388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.635397 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.662928 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.701835 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.738462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.738514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.738531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.738554 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.738571 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.747814 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.789050 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.826907 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.841497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.841540 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.841553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.841575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.841589 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.865217 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.944969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.945026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.945036 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.945056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:04 crc kubenswrapper[4957]: I0218 14:32:04.945069 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:04Z","lastTransitionTime":"2026-02-18T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.047377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.047435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.047444 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.047462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.047473 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.150172 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.150245 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.150265 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.150298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.150316 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.161836 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:27:23.08702268 +0000 UTC Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.213064 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:05 crc kubenswrapper[4957]: E0218 14:32:05.213460 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.276244 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.276298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.276314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.276336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.276354 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.378915 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.378961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.378974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.378992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.379005 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.482582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.482631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.482639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.482658 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.482701 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.487976 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/0.log" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.491097 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794" exitCode=1 Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.491157 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.491837 4957 scope.go:117] "RemoveContainer" containerID="8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.507411 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.531519 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.546101 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.561842 4957 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.579719 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.584493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.584527 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.584542 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.584559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.584572 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.598643 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"message\\\":\\\"NetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:32:04.967405 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967522 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967722 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967995 6263 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.968468 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968766 6263 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968906 6263 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.609492 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.620370 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.629960 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.640832 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.653007 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.667105 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.680352 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.686791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.686837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc 
kubenswrapper[4957]: I0218 14:32:05.686846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.686862 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.686872 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.690263 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:05Z is after 2025-08-24T17:21:41Z" Feb 
18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.790905 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.791378 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.791457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.791482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.791503 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.895405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.895456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.895467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.895483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.895495 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.998693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.998732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.998753 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.998773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:05 crc kubenswrapper[4957]: I0218 14:32:05.998786 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:05Z","lastTransitionTime":"2026-02-18T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.101341 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.101393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.101408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.101452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.101469 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.162601 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:16:27.878399704 +0000 UTC Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.204051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.204340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.204350 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.204365 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.204374 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.212317 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.212317 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:06 crc kubenswrapper[4957]: E0218 14:32:06.212444 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:06 crc kubenswrapper[4957]: E0218 14:32:06.212506 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.306882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.306924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.306934 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.306952 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.306963 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.454068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.454123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.454132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.454147 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.454157 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.497985 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/0.log" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.501007 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.501139 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.521237 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c
023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.547765 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.556695 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.556734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.556744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.556760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.556771 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.565844 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.578664 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.595172 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.610492 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.626960 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.638827 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.652927 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.659377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.659727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.659817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.659910 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.660005 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.667832 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.681698 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.698325 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.708508 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.725342 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"message\\\":\\\"NetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:32:04.967405 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967522 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967722 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967995 6263 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.968468 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968766 6263 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968906 6263 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:06Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.762022 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.762286 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.762385 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.762522 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.762603 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.864866 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.865126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.865202 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.865273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.865341 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.968651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.968700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.968709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.968726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:06 crc kubenswrapper[4957]: I0218 14:32:06.968739 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:06Z","lastTransitionTime":"2026-02-18T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.071448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.071503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.071515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.071535 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.071550 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.100162 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7"] Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.100647 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.103833 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.104401 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.122985 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\
\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.138235 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.153032 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.164279 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:28:47.845679409 +0000 UTC Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.167469 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.179359 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.179409 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.179440 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.179464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.179478 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.185271 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.195501 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/841cf9b6-bfbb-4ff0-8899-acd00478a669-env-overrides\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.195580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6v5\" (UniqueName: \"kubernetes.io/projected/841cf9b6-bfbb-4ff0-8899-acd00478a669-kube-api-access-xg6v5\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.195599 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/841cf9b6-bfbb-4ff0-8899-acd00478a669-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.195615 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/841cf9b6-bfbb-4ff0-8899-acd00478a669-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.198005 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.211688 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.212027 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:07 crc kubenswrapper[4957]: E0218 14:32:07.212129 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.222565 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.234384 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.248512 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.258895 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.271542 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.282474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.282513 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.282522 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.282534 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.282545 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.286057 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.296124 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6v5\" (UniqueName: \"kubernetes.io/projected/841cf9b6-bfbb-4ff0-8899-acd00478a669-kube-api-access-xg6v5\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.296157 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/841cf9b6-bfbb-4ff0-8899-acd00478a669-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.296175 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/841cf9b6-bfbb-4ff0-8899-acd00478a669-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.296219 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/841cf9b6-bfbb-4ff0-8899-acd00478a669-env-overrides\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.298612 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/841cf9b6-bfbb-4ff0-8899-acd00478a669-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.300059 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/841cf9b6-bfbb-4ff0-8899-acd00478a669-env-overrides\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.302413 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/841cf9b6-bfbb-4ff0-8899-acd00478a669-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.306524 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d5
01ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"message\\\":\\\"NetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:32:04.967405 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967522 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967722 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967995 6263 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.968468 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968766 6263 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968906 6263 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.318024 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6v5\" (UniqueName: \"kubernetes.io/projected/841cf9b6-bfbb-4ff0-8899-acd00478a669-kube-api-access-xg6v5\") pod \"ovnkube-control-plane-749d76644c-52gh7\" (UID: \"841cf9b6-bfbb-4ff0-8899-acd00478a669\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.323566 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.385559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.385621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.385644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.385675 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.385696 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.419122 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" Feb 18 14:32:07 crc kubenswrapper[4957]: W0218 14:32:07.433523 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod841cf9b6_bfbb_4ff0_8899_acd00478a669.slice/crio-9cdef918b512885712ecd4b06b3c6fbdfd5e4760ce2e861c28d7a9d8a34fb523 WatchSource:0}: Error finding container 9cdef918b512885712ecd4b06b3c6fbdfd5e4760ce2e861c28d7a9d8a34fb523: Status 404 returned error can't find the container with id 9cdef918b512885712ecd4b06b3c6fbdfd5e4760ce2e861c28d7a9d8a34fb523 Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.489954 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.489995 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.490007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.490025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.490038 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.507204 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/1.log" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.508273 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/0.log" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.512398 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8" exitCode=1 Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.512507 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.512573 4957 scope.go:117] "RemoveContainer" containerID="8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.513343 4957 scope.go:117] "RemoveContainer" containerID="38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8" Feb 18 14:32:07 crc kubenswrapper[4957]: E0218 14:32:07.513554 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.516131 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" event={"ID":"841cf9b6-bfbb-4ff0-8899-acd00478a669","Type":"ContainerStarted","Data":"9cdef918b512885712ecd4b06b3c6fbdfd5e4760ce2e861c28d7a9d8a34fb523"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.529199 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.543885 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.559468 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.570027 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 
14:32:07.585505 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.592796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.592856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.592873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.592895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.592911 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.605608 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.620085 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.633343 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.643101 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.664235 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f8908d037eeb540061d23f1918042ff39dce95b7e11ae3fd0767cee52738794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"message\\\":\\\"NetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:32:04.967405 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967522 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967722 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.967995 6263 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:32:04.968468 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968766 6263 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:32:04.968906 6263 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 
services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.676365 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.689558 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.696067 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.696100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.696108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.696124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.696133 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.704182 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.724728 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.741414 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.798394 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.798474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.798493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.798516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.798533 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.901445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.901500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.901516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.901597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:07 crc kubenswrapper[4957]: I0218 14:32:07.901615 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:07Z","lastTransitionTime":"2026-02-18T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.004182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.004232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.004242 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.004282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.004295 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.106867 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.106925 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.106938 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.106958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.106978 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
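The repeated KubeletNotReady condition above states its own cause: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the container runtime network is reported NetworkReady=false. A minimal illustrative sketch (not part of this log, and not the kubelet's actual implementation) approximates that readiness check in Go by scanning the directory named in the message for CNI config files:

```go
// Sketch only: approximate the "no CNI configuration file" check from the log
// message by listing the directory for .conf/.conflist/.json entries.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const cniConfDir = "/etc/kubernetes/cni/net.d/" // directory quoted in the log message
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		// CNI network configs conventionally use these extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}
```

On this node the directory stays empty until the ovnkube-node pod (seen crash-looping below) writes its config, which is why the Ready=False condition keeps being re-recorded.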
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.165233 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:12:05.603447272 +0000 UTC
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.209273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.209350 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.209368 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.209393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.209415 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.212566 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.212589 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:08 crc kubenswrapper[4957]: E0218 14:32:08.212725 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:32:08 crc kubenswrapper[4957]: E0218 14:32:08.212899 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.311956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.312011 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.312030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.312054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.312071 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.414596 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.414631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.414651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.414676 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.414687 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.518715 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.518787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.518802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.518822 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.518834 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.523290 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/1.log"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.528142 4957 scope.go:117] "RemoveContainer" containerID="38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.531466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" event={"ID":"841cf9b6-bfbb-4ff0-8899-acd00478a669","Type":"ContainerStarted","Data":"d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff"}
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.531509 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" event={"ID":"841cf9b6-bfbb-4ff0-8899-acd00478a669","Type":"ContainerStarted","Data":"e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153"}
Feb 18 14:32:08 crc kubenswrapper[4957]: E0218 14:32:08.531820 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.555019 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.567195 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.578025 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.591177 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb
4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.603558 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.620179 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.621539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.621574 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.621585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.621619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.621632 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.633103 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z"
Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.645374 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.654513 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.666252 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.678973 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.690294 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.701350 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.719841 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.723724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.723762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.723775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.723793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.723804 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.732655 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.744970 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.764287 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.778848 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.793162 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.805145 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.822580 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.826246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.826274 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.826283 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.826324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.826344 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.838190 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.855090 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.867862 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.883443 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.896482 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.907071 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.924153 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.928300 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.928351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.928362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.928383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.928396 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:08Z","lastTransitionTime":"2026-02-18T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.940661 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.942560 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jkmlc"] Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.943164 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:08 crc kubenswrapper[4957]: E0218 14:32:08.943256 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:08 crc kubenswrapper[4957]: I0218 14:32:08.957790 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea7
91a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.016823 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmzc\" (UniqueName: \"kubernetes.io/projected/58c40982-35c8-4670-ad21-513a7a5a458e-kube-api-access-gkmzc\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.016909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.018067 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.033358 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.033397 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.033406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.033435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.033446 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.046391 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.061028 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.077242 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.091496 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.107904 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.118493 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmzc\" (UniqueName: \"kubernetes.io/projected/58c40982-35c8-4670-ad21-513a7a5a458e-kube-api-access-gkmzc\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.118540 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.118650 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.118706 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:32:09.618689819 +0000 UTC m=+36.139554563 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.119523 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.130488 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.135275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.135320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.135330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.135350 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.135361 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.138097 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmzc\" (UniqueName: \"kubernetes.io/projected/58c40982-35c8-4670-ad21-513a7a5a458e-kube-api-access-gkmzc\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.147881 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.162149 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.165864 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:58:21.308147907 +0000 UTC Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.175327 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.188145 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.203748 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.212850 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.213192 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.216676 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.218954 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.219110 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:32:25.219084124 +0000 UTC m=+51.739948888 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.219175 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.219274 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.219283 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.219371 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 
18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.219382 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:25.219365802 +0000 UTC m=+51.740230566 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.219455 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:25.219439394 +0000 UTC m=+51.740304148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.237277 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d5
01ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.237974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.238010 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.238022 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.238039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.238053 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
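
The recurring NodeNotReady condition has a plainer cause than the webhook failures: the CNI configuration directory is empty because ovnkube-controller is crash-looping and never writes its config, so the container runtime reports NetworkReady=false. A sketch for checking the directory the runtime is waiting on; the path is taken verbatim from the log message:

// cni_conf_check.go — a minimal sketch for confirming what the kubelet is
// complaining about: whether the CNI conf dir contains any network config.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext != ".conf" && ext != ".conflist" && ext != ".json" {
			continue
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		var conf struct {
			Name string `json:"name"`
		}
		if json.Unmarshal(data, &conf) == nil {
			fmt.Printf("%s: network %q\n", e.Name(), conf.Name)
			found = true
		}
	}
	if !found {
		// The state the log reports: NetworkReady=false until the network
		// provider (here, ovnkube) writes its configuration.
		fmt.Println("no CNI configuration found — node stays NotReady")
	}
}
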
Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.251763 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:09Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.320693 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.320793 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.320918 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.320936 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.320939 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.320976 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.320992 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.321071 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:25.321044834 +0000 UTC m=+51.841909738 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.320947 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.321204 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:25.321183948 +0000 UTC m=+51.842048872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.340174 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.340216 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.340228 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.340243 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.340253 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.443916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.443985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.444009 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.444042 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.444065 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.546249 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.546292 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.546303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.546319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.546329 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.624517 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.624615 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: E0218 14:32:09.624662 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:32:10.624647848 +0000 UTC m=+37.145512582 (durationBeforeRetry 1s). 
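
The durationBeforeRetry values follow the kubelet's exponential backoff for volume operations: the fresh metrics-certs failure above retries after 1s, while volumes that have been failing since earlier in the log are already at 16s (0.5s → 1s → 2s → 4s → 8s → 16s). A sketch of that doubling schedule; the 500ms initial delay and ~2m2s cap are assumed defaults of kubelet's exponential-backoff helper, not values read from this log:

// backoff_sketch.go — a minimal sketch of the doubling backoff behind the
// "durationBeforeRetry" values seen in the log.
package main

import (
	"fmt"
	"time"
)

func durationBeforeRetry(failures int) time.Duration {
	const (
		initial = 500 * time.Millisecond // assumed default
		ceiling = 2*time.Minute + 2*time.Second
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > ceiling {
			return ceiling
		}
	}
	return d
}

func main() {
	for f := 1; f <= 7; f++ {
		fmt.Printf("after failure %d: retry in %s\n", f, durationBeforeRetry(f))
	}
}
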
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.648296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.648603 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.648616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.648630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.648640 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.750929 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.750968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.750982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.750998 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.751010 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.853628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.853768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.853792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.853845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.853859 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.956875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.956928 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.956943 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.956962 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:09 crc kubenswrapper[4957]: I0218 14:32:09.956973 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:09Z","lastTransitionTime":"2026-02-18T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.059509 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.059552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.059564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.059583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.059594 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.162385 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.162457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.162474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.162495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.162512 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.167002 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:00:39.963988774 +0000 UTC Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.212690 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.212786 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.212850 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.212946 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
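
The certificate_manager line above is worth pausing on: the kubelet-serving certificate is valid until 2026-02-24, but the rotation deadline of 2025-11-29 is already months in the past, so the kubelet is overdue to rotate and will keep retrying while CSR signing is unavailable. client-go's certificate manager picks the deadline at a jittered point late in the validity window; a sketch with an assumed jitter fraction and an assumed one-year lifetime, neither read from this log:

// rotation_deadline_sketch.go — a sketch of how a jittered rotation deadline
// like the logged "rotation deadline is 2025-11-29 …" is derived.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Assumed jitter: rotate somewhere in roughly the last 30% of the window.
	jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed lifetime
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline passed — manager should request a new certificate")
	}
}
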
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.240013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.240100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.240114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.240152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.240185 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.257459 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.262551 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.262632 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.262650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.262670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.262685 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.275629 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.278697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.278752 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.278765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.278779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.278789 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.290636 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.294147 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.294361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.294465 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.294570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.294649 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.307476 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.311215 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.311245 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.311253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.311268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.311276 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.322536 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.322937 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.324515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
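The repeated retries above all die on the same TLS handshake: the node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node's clock (2026-02-18), so verification fails before the status patch is ever applied, and the kubelet eventually gives up ("exceeds retry count"). The error text is the standard one Go's crypto/x509 emits when the current time falls outside a certificate's validity window. A minimal, self-contained Go sketch that reproduces the same error class (illustrative only, not kubelet or webhook code; the subject name and lifetimes are made up):

// expired_cert_demo.go — self-sign a certificate that expired in the past,
// then verify it at the current time; crypto/x509 rejects it with the same
// "certificate has expired or is not yet valid" message seen in the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity"}, // illustrative name
		NotBefore:             time.Now().Add(-2 * 365 * 24 * time.Hour),
		NotAfter:              time.Now().Add(-180 * 24 * time.Hour), // expired ~6 months ago
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)
	// The validity-window check runs before any signature or usage checks,
	// so an expired cert fails no matter how often the caller retries.
	_, err := cert.Verify(x509.VerifyOptions{Roots: roots, CurrentTime: time.Now()})
	fmt.Println(err)
	// Prints: x509: certificate has expired or is not yet valid:
	//         current time <now> is after <NotAfter>
}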
event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.324604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.324662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.324737 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.324798 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.427678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.427732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.427746 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.427764 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.427776 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.530294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.530329 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.530340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.530355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.530364 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.633175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.633209 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.633218 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.633232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.633241 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.637739 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.637877 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:10 crc kubenswrapper[4957]: E0218 14:32:10.637939 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:32:12.637922178 +0000 UTC m=+39.158786922 (durationBeforeRetry 2s). 
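The mount failure above is parked rather than retried immediately: nestedpendingoperations records the failed MountVolume operation and refuses further attempts until a backoff window elapses ("durationBeforeRetry 2s"). The 2s is consistent with a doubling backoff; the Go sketch below models that pattern with assumed constants (500ms initial delay doubling to a fixed cap, patterned on the kubelet's volume backoff, not values read from this cluster):

// mount_retry_backoff.go — model the durationBeforeRetry growth for a volume
// operation that keeps failing: each failure doubles the wait, up to a cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	const initial = 500 * time.Millisecond        // assumed starting delay
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
	d := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d fails -> next retry permitted after %v\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}

Under these assumptions the third failure waits 2s, matching the log line; the backoff is what keeps a missing secret from flooding the log at full speed the way the node-status loop does.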
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.735648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.735677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.735685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.735700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.735709 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.838227 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.838274 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.838286 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.838301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.838314 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.941063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.941535 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.941755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.941957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:10 crc kubenswrapper[4957]: I0218 14:32:10.942133 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:10Z","lastTransitionTime":"2026-02-18T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.044630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.044687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.044709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.044736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.044758 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.147973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.148014 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.148029 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.148049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.148064 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.167385 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:12:47.687087034 +0000 UTC Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.212877 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.212895 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:11 crc kubenswrapper[4957]: E0218 14:32:11.213029 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:11 crc kubenswrapper[4957]: E0218 14:32:11.213145 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.250301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.250338 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.250348 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.250363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.250374 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
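One line above stands out from the network noise: the kubelet-serving certificate does not expire until 2026-02-24, but its rotation deadline, 2025-11-15, is already three months in the past at the time of logging, so the certificate manager should be attempting rotation immediately. The deadline itself is a randomized point late in the validity window; the sketch below models that computation (the 70–90% jitter band and the one-year validity are assumptions, patterned on client-go's certificate manager, not read from this cluster):

// rotation_deadline.go — derive a jittered rotation deadline from a
// certificate's validity window; a deadline earlier than "now" means
// rotation is overdue, as in the log line above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// pick a uniform point between 70% and 90% of the validity window (assumed band)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed 1y validity
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	now := time.Date(2026, 2, 18, 14, 32, 11, 0, time.UTC) // log timestamp
	fmt.Println("rotation overdue at log time?", deadline.Before(now))
}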
Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.352956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.352995 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.353007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.353026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.353037 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.456950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.457013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.457026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.457049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.457063 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.560157 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.560222 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.560239 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.560264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.560283 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.663454 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.663514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.663534 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.663555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.663568 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.767038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.767111 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.767134 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.767166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.767188 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.870528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.870593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.870610 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.870635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.870652 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.974332 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.974402 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.974732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.974846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:11 crc kubenswrapper[4957]: I0218 14:32:11.974868 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:11Z","lastTransitionTime":"2026-02-18T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.078130 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.078203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.078226 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.078255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.078280 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
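(The repeated "no CNI configuration file in /etc/kubernetes/cni/net.d/" message means the network-readiness check finds no usable CNI config in that directory, so the runtime reports NetworkReady=false until the network provider writes one. A rough Go sketch of such a check; hasCNIConfig, the extension list, and the directory handling are assumptions for illustration, not the CRI-O/ocicni implementation:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig (hypothetical helper) lists the CNI config directory and
// reports whether any plausible config file (.conf, .conflist, .json)
// is present.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d/")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if !ok {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}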
Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.168589 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:51:16.379111366 +0000 UTC Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.181382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.181466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.181481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.181503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.181517 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.212183 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:12 crc kubenswrapper[4957]: E0218 14:32:12.212648 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.212372 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:12 crc kubenswrapper[4957]: E0218 14:32:12.212900 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.284976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.285050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.285070 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.285095 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.285113 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.387758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.388118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.388317 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.388658 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.388914 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.492895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.493562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.493614 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.493636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.493663 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.597474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.597558 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.597581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.597633 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.597662 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.660283 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:12 crc kubenswrapper[4957]: E0218 14:32:12.660523 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:12 crc kubenswrapper[4957]: E0218 14:32:12.660631 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:32:16.660602114 +0000 UTC m=+43.181467038 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.701292 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.701340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.701351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.701368 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.701383 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.805306 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.805348 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.805359 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.805377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.805391 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
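(The nestedpendingoperations entry above defers the metrics-certs mount retry with "durationBeforeRetry 4s": per-operation exponential backoff, where the wait doubles after each failed attempt up to a cap. A minimal sketch; the backoff type, the 500ms initial delay, and the 2-minute cap are assumptions for illustration:)

package main

import (
	"fmt"
	"time"
)

// backoff is a hypothetical stand-in for per-operation retry state: the
// wait doubles after each failure, starting at an assumed 500ms and
// capped at an assumed 2 minutes.
type backoff struct {
	delay time.Duration
}

func (b *backoff) next() time.Duration {
	const initial = 500 * time.Millisecond
	const limit = 2 * time.Minute
	if b.delay == 0 {
		b.delay = initial
	} else {
		b.delay *= 2
		if b.delay > limit {
			b.delay = limit
		}
	}
	return b.delay
}

func main() {
	var b backoff
	for i := 1; i <= 5; i++ {
		fmt.Printf("failure %d: next retry in %v\n", i, b.next())
	}
	// failure 4 prints 4s, matching "durationBeforeRetry 4s" above
	// under these assumed constants.
}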
Has your network provider started?"} Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.907967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.908018 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.908030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.908046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:12 crc kubenswrapper[4957]: I0218 14:32:12.908055 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:12Z","lastTransitionTime":"2026-02-18T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.011533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.011589 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.011605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.011626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.011639 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.114474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.114552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.114569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.114595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.114612 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.169620 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:35:06.827996611 +0000 UTC Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.212534 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.212607 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:13 crc kubenswrapper[4957]: E0218 14:32:13.212692 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:13 crc kubenswrapper[4957]: E0218 14:32:13.212785 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.217544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.217566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.217574 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.217587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.217596 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.320346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.320392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.320404 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.320439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.320452 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.423051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.423104 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.423115 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.423138 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.423147 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.525330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.525385 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.525400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.525442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.525457 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.627887 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.627950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.627968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.627994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.628015 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.730685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.730741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.730760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.730783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.730800 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.833509 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.833588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.833605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.833631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.833653 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.937689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.937772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.937796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.937828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:13 crc kubenswrapper[4957]: I0218 14:32:13.937850 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:13Z","lastTransitionTime":"2026-02-18T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.040919 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.041319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.041575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.041772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.041911 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:14Z","lastTransitionTime":"2026-02-18T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.145677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.145764 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.145798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.145831 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.145854 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:14Z","lastTransitionTime":"2026-02-18T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.170462 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:58:49.637849182 +0000 UTC Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.212304 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:14 crc kubenswrapper[4957]: E0218 14:32:14.212444 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.212632 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:14 crc kubenswrapper[4957]: E0218 14:32:14.212786 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.224247 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.242622 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.251696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.251743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.251756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.251775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.251786 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:14Z","lastTransitionTime":"2026-02-18T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.255986 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.269203 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.280257 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.308252 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d5
01ebe838b0141637dc5bded8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.322486 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.338031 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.353715 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.354022 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.354065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.354075 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.354093 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.354103 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:14Z","lastTransitionTime":"2026-02-18T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.373543 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.389537 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.403381 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.427903 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.456957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.457017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.457030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.457080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.457097 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:14Z","lastTransitionTime":"2026-02-18T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.458926 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.471648 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.490229 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.559078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.559130 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:14 crc 
kubenswrapper[4957]: I0218 14:32:14.559144 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.559166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:14 crc kubenswrapper[4957]: I0218 14:32:14.559185 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:14Z","lastTransitionTime":"2026-02-18T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... 5 identical NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" blocks, logged roughly every 100ms from 14:32:14.661 to 14:32:15.075, elided ...]
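The webhook failure above ("current time 2026-02-18T14:32:14Z is after 2025-08-24T17:21:41Z") is the standard x509 validity-window check tripping during the TLS handshake. A minimal Go sketch that reproduces the same check against a PEM certificate on disk; the file path is supplied by the caller, since this log does not show where the webhook's serving certificate actually lives:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// e.g. go run . /path/to/tls.crt
	pemBytes, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// Same window test the handshake applies; the logged error corresponds
	// to now.After(cert.NotAfter) being true.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}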
Feb 18 14:32:15 crc kubenswrapper[4957]: I0218 14:32:15.172020 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:52:19.424988103 +0000 UTC
[... 1 identical node-status block at 14:32:15.179, elided ...]
Feb 18 14:32:15 crc kubenswrapper[4957]: I0218 14:32:15.212590 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:32:15 crc kubenswrapper[4957]: I0218 14:32:15.212638 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:32:15 crc kubenswrapper[4957]: E0218 14:32:15.212759 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
Feb 18 14:32:15 crc kubenswrapper[4957]: E0218 14:32:15.212842 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... 2 identical node-status blocks, 14:32:15.282 and 14:32:15.385, elided ...]
[... 6 identical node-status blocks, logged roughly every 100ms from 14:32:15.488 to 14:32:16.004, elided ...]
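Each "Node became not ready" line carries the Ready condition it is about to write as plain JSON, which decodes directly into the corev1.NodeCondition type. A minimal sketch (requires the k8s.io/api module):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// One condition payload copied from the setters.go:603 entries above.
const logged = `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:15Z","lastTransitionTime":"2026-02-18T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

func main() {
	var cond corev1.NodeCondition
	if err := json.Unmarshal([]byte(logged), &cond); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", cond.Type, cond.Status, cond.Reason, cond.Message)
}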
[... 1 identical node-status block at 14:32:16.106, elided ...]
Feb 18 14:32:16 crc kubenswrapper[4957]: I0218 14:32:16.173949 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:17:56.554860167 +0000 UTC
[... 1 identical node-status block at 14:32:16.209, elided ...]
Feb 18 14:32:16 crc kubenswrapper[4957]: I0218 14:32:16.212166 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:16 crc kubenswrapper[4957]: I0218 14:32:16.212215 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:16 crc kubenswrapper[4957]: E0218 14:32:16.212291 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:32:16 crc kubenswrapper[4957]: E0218 14:32:16.212376 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... 2 identical node-status blocks, 14:32:16.312 and 14:32:16.414, elided ...]
[... 2 identical node-status blocks, 14:32:16.517 and 14:32:16.619, elided ...]
Feb 18 14:32:16 crc kubenswrapper[4957]: I0218 14:32:16.703804 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:32:16 crc kubenswrapper[4957]: E0218 14:32:16.703974 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 14:32:16 crc kubenswrapper[4957]: E0218 14:32:16.704042 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:32:24.704022288 +0000 UTC m=+51.224887032 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered
[... 2 identical node-status blocks, 14:32:16.722 and 14:32:16.825, elided ...]
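The "durationBeforeRetry 8s" above comes from the volume manager's exponential backoff on the failed mount. A sketch of the doubling schedule; the 500ms floor and 2m cap are assumptions, but 8s would correspond to the fifth consecutive failure under them:

package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the delay per consecutive failure, capped.
func durationBeforeRetry(failures int) time.Duration {
	const (
		initial = 500 * time.Millisecond // assumed floor
		ceiling = 2 * time.Minute        // assumed cap
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= ceiling {
			return ceiling
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> retry in %s\n", n, durationBeforeRetry(n))
	}
}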
[... 3 identical node-status blocks, logged roughly every 100ms from 14:32:16.928 to 14:32:17.137, elided ...]
Feb 18 14:32:17 crc kubenswrapper[4957]: I0218 14:32:17.175176 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:18:04.54884215 +0000 UTC
Feb 18 14:32:17 crc kubenswrapper[4957]: I0218 14:32:17.212075 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:32:17 crc kubenswrapper[4957]: I0218 14:32:17.212172 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:32:17 crc kubenswrapper[4957]: E0218 14:32:17.212235 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
Feb 18 14:32:17 crc kubenswrapper[4957]: E0218 14:32:17.212453 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... 1 identical node-status block at 14:32:17.239, elided ...]
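The util.go:30 lines mean the kubelet found no live sandbox for those pods in the container runtime. One way to cross-check is to list sandboxes straight from the runtime over the CRI API; a sketch, assuming CRI-O's default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is the CRI-O default and an assumption here.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s/%s state=%s\n", sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}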
[... 9 identical node-status blocks, logged roughly every 100ms from 14:32:17.342 to 14:32:18.167, elided ...]
Feb 18 14:32:18 crc kubenswrapper[4957]: I0218 14:32:18.175365 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:59:27.850858913 +0000 UTC
Feb 18 14:32:18 crc kubenswrapper[4957]: I0218 14:32:18.211960 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:18 crc kubenswrapper[4957]: I0218 14:32:18.212025 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:18 crc kubenswrapper[4957]: E0218 14:32:18.212153 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:32:18 crc kubenswrapper[4957]: E0218 14:32:18.212204 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... 9 identical node-status blocks, logged roughly every 100ms from 14:32:18.269 to 14:32:19.093; the capture ends mid-entry ...]
Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.175500 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:06:15.865715079 +0000 UTC Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.195760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.195818 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.195830 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.195869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.195885 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.212443 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:19 crc kubenswrapper[4957]: E0218 14:32:19.212673 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.212744 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:19 crc kubenswrapper[4957]: E0218 14:32:19.212949 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.298222 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.298269 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.298281 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.298297 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.298309 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.401411 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.401478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.401489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.401509 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.401519 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.504405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.504475 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.504488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.504506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.504518 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.607702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.607768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.607787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.607819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.607843 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.710916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.710962 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.710970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.710986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.710996 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.814010 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.814076 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.814092 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.814113 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.814127 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.917216 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.917270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.917280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.917296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:19 crc kubenswrapper[4957]: I0218 14:32:19.917309 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:19Z","lastTransitionTime":"2026-02-18T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.019583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.019623 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.019636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.019658 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.019671 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.122756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.122811 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.122875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.122902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.122923 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.176149 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:14:57.334224881 +0000 UTC Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.212052 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.212206 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.212372 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.212635 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.213324 4957 scope.go:117] "RemoveContainer" containerID="38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.226985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.227132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.227195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.227269 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.227332 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.329488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.329533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.329551 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.329571 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.329590 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.433506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.433903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.433916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.433937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.433950 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.538689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.538733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.538748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.538767 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.538781 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.570736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.570778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.570790 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.570807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.570820 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.576528 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/1.log" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.579639 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.579790 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.582932 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.590652 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.590706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.590717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.590734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.590745 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.611880 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.613404 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.618297 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.618324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.618348 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.618362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.618372 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.638662 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.640412 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.642653 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.642690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.642701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.642717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.642728 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.653405 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.653972 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.659684 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.659716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.659726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.659742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.659752 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.669647 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.684517 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: E0218 14:32:20.684691 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.686312 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.686335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.686345 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.686361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.686374 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.689812 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.700919 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.714712 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.724550 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.735666 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.750754 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.765154 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.780221 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.788860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.788910 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.788947 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.788976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.788990 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.798710 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.814992 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.834096 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.845792 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:20Z is after 2025-08-24T17:21:41Z" Feb 18 
14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.891783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.891843 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.891855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.891875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.891889 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.994810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.994846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.994854 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.994869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:20 crc kubenswrapper[4957]: I0218 14:32:20.994878 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:20Z","lastTransitionTime":"2026-02-18T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.097733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.097810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.097836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.097860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.097878 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
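All of the NodeNotReady churn above reduces to a single condition: kubelet keeps republishing Ready=False because the container runtime reports NetworkReady=false while /etc/kubernetes/cni/net.d/ holds no CNI configuration yet (ovn-kubernetes has not finished writing one, per the ovnkube-controller restart recorded in the ovnkube-node-t7lp9 status earlier). The sketch below is an illustrative Go version of that directory-level check, not kubelet's actual readiness code; the path and the accepted file extensions are assumptions taken from the log message and common CNI conventions.

```go
// cnicheck.go - illustrative sketch, not kubelet's real implementation.
// Mirrors the readiness condition in the log: the node stays NotReady
// until the network plugin drops a CNI config into the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether any CNI configuration file exists in
// confDir (kubelet on this node reads /etc/kubernetes/cni/net.d/).
func networkReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		// Extensions assumed from common CNI usage (.conf/.conflist/.json).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady("/etc/kubernetes/cni/net.d")
	if err != nil || !ready {
		// This is the condition kubelet keeps republishing above.
		fmt.Println(`Ready=False reason=KubeletNotReady: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady`)
		return
	}
	fmt.Println("Ready=True")
}
```

Once ovnkube-controller writes its configuration into that directory, a check of this shape flips and the repeated NodeNotReady events stop.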
Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.176756 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:52:36.999992074 +0000 UTC Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.201321 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.201396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.201454 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.201486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.201510 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.212729 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:21 crc kubenswrapper[4957]: E0218 14:32:21.212924 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.213112 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:21 crc kubenswrapper[4957]: E0218 14:32:21.213397 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.304642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.304719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.304744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.304776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.304799 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.408800 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.408885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.408904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.408931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.408950 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.513514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.513571 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.513583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.513604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.513616 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.586210 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/2.log" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.586990 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/1.log" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.591205 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1" exitCode=1 Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.591251 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.591299 4957 scope.go:117] "RemoveContainer" containerID="38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.592136 4957 scope.go:117] "RemoveContainer" containerID="01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1" Feb 18 14:32:21 crc kubenswrapper[4957]: E0218 14:32:21.592317 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.616522 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.616585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.616596 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.616617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.616629 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.626995 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e
9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38a947c08dfc2f01adf41707eda957fa7da665d501ebe838b0141637dc5bded8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:06Z\\\",\\\"message\\\":\\\"14:32:06.547502 6408 services_controller.go:356] Processing sync for service openshift-network-operator/metrics for network=default\\\\nI0218 14:32:06.547539 6408 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0218 14:32:06.547544 6408 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 9.4µs\\\\nI0218 14:32:06.547552 6408 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 47.872µs\\\\nI0218 14:32:06.547528 6408 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane for network=default\\\\nI0218 14:32:06.547575 6408 services_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 49.831µs\\\\nI0218 14:32:06.547558 6408 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 19.081µs\\\\nI0218 14:32:06.547639 6408 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could 
not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f
29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.638961 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 
14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.651555 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.662723 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.674935 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.686900 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.704861 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.714240 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.719084 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.719119 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.719135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.719152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.719165 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.725975 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.739997 4957 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.752191 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.763589 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.773697 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.786298 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.798797 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.814004 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:21Z is after 2025-08-24T17:21:41Z"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.821378 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.821406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.821431 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.821449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.821461 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.924499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.924547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.924560 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.924580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:21 crc kubenswrapper[4957]: I0218 14:32:21.924592 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:21Z","lastTransitionTime":"2026-02-18T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.026957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.026998 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.027008 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.027025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.027035 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.130458 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.130502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.130511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.130526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.130539 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.177416 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:59:51.05225146 +0000 UTC
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.212133 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.212188 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:22 crc kubenswrapper[4957]: E0218 14:32:22.212329 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:32:22 crc kubenswrapper[4957]: E0218 14:32:22.212409 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.233243 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.233291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.233304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.233322 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.233334 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.335771 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.335817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.335828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.335844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.335855 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.426618 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.438769 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.438819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.438830 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.438847 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.438858 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.541400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.541495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.541514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.541539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.541556 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.596581 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/2.log" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.601881 4957 scope.go:117] "RemoveContainer" containerID="01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1" Feb 18 14:32:22 crc kubenswrapper[4957]: E0218 14:32:22.602169 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.614476 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.627818 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.646500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.646516 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.646559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.646758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.646789 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.646808 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.668284 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.680015 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.695641 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.712699 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.732290 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.749617 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.749690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.750262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.750284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.750311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.750329 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.760300 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z"
Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.781103 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.794718 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.807064 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.820713 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.836559 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.849971 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:22Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.852663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.852709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.852720 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.852739 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.852765 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.956238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.956283 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.956293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.956310 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:22 crc kubenswrapper[4957]: I0218 14:32:22.956323 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:22Z","lastTransitionTime":"2026-02-18T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.058999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.059041 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.059053 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.059068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.059079 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.162264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.162299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.162311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.162327 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.162338 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.177926 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:21:54.194411666 +0000 UTC Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.212522 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.212602 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:23 crc kubenswrapper[4957]: E0218 14:32:23.212658 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:23 crc kubenswrapper[4957]: E0218 14:32:23.212786 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.265140 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.265229 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.265247 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.265275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.265295 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.368482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.368534 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.368547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.368565 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.368578 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.471379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.471461 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.471477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.471495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.471508 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.574213 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.574302 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.574327 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.574354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.574373 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.677072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.677128 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.677140 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.677159 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.677172 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.780116 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.780539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.780670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.780777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.780883 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.883235 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.883295 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.883308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.883326 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.883338 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.985733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.985851 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.985864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.985885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:23 crc kubenswrapper[4957]: I0218 14:32:23.985901 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:23Z","lastTransitionTime":"2026-02-18T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.088130 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.088185 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.088197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.088218 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.088233 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.178839 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:22:48.161331921 +0000 UTC Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.192856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.193288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.193396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.193503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.193581 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.212586 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.212761 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:24 crc kubenswrapper[4957]: E0218 14:32:24.212914 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:24 crc kubenswrapper[4957]: E0218 14:32:24.213133 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.229068 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.244486 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.257633 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.269155 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.285207 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.297119 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.297345 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc 
kubenswrapper[4957]: I0218 14:32:24.297488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.297716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.298074 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.298117 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 
18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.312020 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.326718 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.342774 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.358815 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.370495 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.385666 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.398728 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.400548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.400688 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.400733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.400759 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.400818 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
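
Every status patch above is rejected with the same root cause: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while this node's clock reads 2026-02-18. The following minimal sketch (not the webhook's or kubelet's own code) reproduces the validity-window check that Go's crypto/x509 applies during verification and that produces the "certificate has expired or is not yet valid" message; the cert path is an assumption based on the webhook-cert volumeMount recorded in these status payloads.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path, inferred from the /etc/webhook-cert/ mount in the log.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// The same window check crypto/x509 performs: outside [NotBefore, NotAfter],
	// verification fails with "certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}
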
Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.415190 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.435775 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.453545 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.503818 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.503863 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.503872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.503888 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.503897 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.607264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.607324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.607334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.607353 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.607364 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.707262 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:24 crc kubenswrapper[4957]: E0218 14:32:24.707558 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:24 crc kubenswrapper[4957]: E0218 14:32:24.707676 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:32:40.707648725 +0000 UTC m=+67.228513499 (durationBeforeRetry 16s). 
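
The nestedpendingoperations records show the retry delay for the failed metrics-certs mount doubling: 16s here, 32s for the operations later in this log. A minimal doubling-backoff sketch consistent with those values; the starting delay and cap below are illustrative, not kubelet's exact constants.

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the current delay up to a fixed cap.
func nextDelay(cur, max time.Duration) time.Duration {
	if next := cur * 2; next <= max {
		return next
	}
	return max
}

func main() {
	delay := 8 * time.Second // illustrative starting point
	for i := 0; i < 5; i++ {
		delay = nextDelay(delay, 2*time.Minute)
		fmt.Printf("retry %d after %s\n", i+1, delay) // 16s, 32s, then capped
	}
}
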
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.710595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.710650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.710667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.710693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.710710 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.813774 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.813838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.813856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.813882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.813898 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.916892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.916941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.916956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.916977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:24 crc kubenswrapper[4957]: I0218 14:32:24.916994 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:24Z","lastTransitionTime":"2026-02-18T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.020027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.020062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.020074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.020089 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.020100 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.121824 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.121876 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.121889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.121907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.121918 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
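
The recurring "no CNI configuration file in /etc/kubernetes/cni/net.d/" message means the runtime found nothing loadable in the CNI config directory. A stand-in sketch of that discovery step (not CRI-O's or libcni's actual code), filtering for the extensions libcni accepts: .conf, .conflist, and .json.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigFiles lists candidate CNI config files in dir.
func cniConfigFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var files []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			files = append(files, filepath.Join(dir, e.Name()))
		}
	}
	return files, nil
}

func main() {
	files, err := cniConfigFiles("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(files) == 0 {
		// Until ovnkube-node writes its config here, the runtime reports
		// NetworkReady=false and the kubelet keeps the node NotReady.
		fmt.Println("no CNI configuration file found")
		return
	}
	for _, f := range files {
		fmt.Println(f)
	}
}
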
Has your network provider started?"} Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.179919 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:22:49.85002102 +0000 UTC Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.212290 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.212548 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.212553 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.212685 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.226019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.226062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.226075 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.226090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.226099 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
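
The certificate_manager line above reports a kubelet-serving cert expiring 2026-02-24 with a rotation deadline of 2025-11-09, already in the past on this clock, so rotation fires immediately. client-go schedules that deadline at a jittered point late in the cert's validity window; the 0.7 + 0.2*rand factor below is an assumption standing in for client-go's exact jitter, chosen because it reproduces a deadline roughly 71% into a one-year window, as logged.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point in the last part of the validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())) // assumed jitter
	return notBefore.Add(jittered)
}

func main() {
	// Window consistent with the logged expiry; the issue time is assumed.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	// A deadline already in the past explains immediate rotation on startup.
}
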
Has your network provider started?"} Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.349335 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.349489 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349534 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:32:57.349510617 +0000 UTC m=+83.870375381 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.349567 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349624 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349643 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349657 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349699 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:57.349688632 +0000 UTC m=+83.870553376 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349709 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349731 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349746 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349782 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.349625 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349789 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:57.349775695 +0000 UTC m=+83.870640459 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.349994 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:57.34995113 +0000 UTC m=+83.870815914 (durationBeforeRetry 32s). 
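
The projected.go records show each kube-api-access volume drawing from several sources (kube-root-ca.crt, openshift-service-ca.crt, the token); every source is resolved independently and the failures are reported together as one list. A sketch of that aggregation pattern; names mirror the log, but the registered-object map is illustrative, not kubelet's implementation.

package main

import (
	"errors"
	"fmt"
)

// prepareProjected resolves each source and joins all failures into one error.
func prepareProjected(sources []string, registered map[string]bool) error {
	var errs []error
	for _, s := range sources {
		if !registered[s] {
			errs = append(errs, fmt.Errorf("object %q not registered", s))
		}
	}
	return errors.Join(errs...) // nil when every source resolved
}

func main() {
	sources := []string{
		"openshift-network-diagnostics/kube-root-ca.crt",
		"openshift-network-diagnostics/openshift-service-ca.crt",
	}
	// Neither configmap is registered in the kubelet's object cache yet,
	// so setup fails with both errors listed, as in the log.
	if err := prepareProjected(sources, map[string]bool{}); err != nil {
		fmt.Println("Error preparing data for projected volume:", err)
	}
}
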
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.350039 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.350214 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: E0218 14:32:25.350277 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:32:57.350262659 +0000 UTC m=+83.871127433 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.352301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.352346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.352362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.352388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.352410 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.454133 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.454166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.454174 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.454189 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.454199 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.558134 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.558204 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.558226 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.558256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.558282 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.660914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.660967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.660980 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.660996 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.661011 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
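Every NodeNotReady heartbeat above carries the same root cause: the container runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/. For illustration, a minimal CNI .conflist of the kind that directory is expected to contain; this example (bridge plugin with host-local IPAM) is hypothetical and is not what this OpenShift/CRC cluster actually uses, since its network operator (note the openshift-multus pod later in the log) generates its own configuration once it starts:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }

Once any valid config file appears in that directory, the runtime flips NetworkReady to true and these heartbeats stop.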
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.763193 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.763241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.763250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.763267 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.763277 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.866062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.866114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.866124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.866141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.866152 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.969580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.969648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.969665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.969690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:25 crc kubenswrapper[4957]: I0218 14:32:25.969708 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:25Z","lastTransitionTime":"2026-02-18T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.071821 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.071893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.071905 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.071922 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.071934 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.174744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.174789 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.174799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.174820 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.174829 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.180939 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:41:37.284213737 +0000 UTC
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.212606 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.212606 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:26 crc kubenswrapper[4957]: E0218 14:32:26.212757 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:26 crc kubenswrapper[4957]: E0218 14:32:26.212821 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.277960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.277998 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.278007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.278051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.278065 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.381051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.381494 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.381505 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.381519 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.381554 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.484059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.484106 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.484122 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.484141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.484176 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.586913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.586949 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.586960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.586973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.586992 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.690311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.690381 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.690404 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.690486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.690508 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.793775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.793846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.793858 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.793878 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.793917 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.896279 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.896320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.896330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.896346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.896356 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.999238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.999289 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.999301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.999322 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:26 crc kubenswrapper[4957]: I0218 14:32:26.999335 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:26Z","lastTransitionTime":"2026-02-18T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.102272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.102330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.102342 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.102367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.102382 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.181589 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:10:04.839253303 +0000 UTC
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.204510 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.204555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.204567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.204584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.204596 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.211912 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.212185 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:32:27 crc kubenswrapper[4957]: E0218 14:32:27.212274 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:27 crc kubenswrapper[4957]: E0218 14:32:27.212460 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.307514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.307557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.307566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.307584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.307594 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.410385 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.410497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.410511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.410531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.410544 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.513964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.514018 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.514026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.514044 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.514053 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.616799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.616894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.616908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.616929 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.616947 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.719907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.720621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.720659 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.720684 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.720700 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.823586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.823626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.823638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.823654 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.823667 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.926146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.926190 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.926202 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.926219 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:27 crc kubenswrapper[4957]: I0218 14:32:27.926230 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:27Z","lastTransitionTime":"2026-02-18T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.029956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.030008 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.030021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.030040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.030053 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.132879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.132940 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.132951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.132970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.132983 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.182000 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:49:19.207777468 +0000 UTC
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.212820 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.212866 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:32:28 crc kubenswrapper[4957]: E0218 14:32:28.213001 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:32:28 crc kubenswrapper[4957]: E0218 14:32:28.213131 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.236451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.236497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.236509 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.236525 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.236539 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.339264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.339304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.339315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.339330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.339342 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.442640 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.442688 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.442700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.442718 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.442730 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.545249 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.545295 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.545306 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.545324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.545334 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.648247 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.648296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.648307 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.648325 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.648337 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.751336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.751402 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.751452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.751478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.751533 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.854364 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.854397 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.854410 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.854441 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.854453 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.958167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.958205 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.958217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.958240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:28 crc kubenswrapper[4957]: I0218 14:32:28.958252 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:28Z","lastTransitionTime":"2026-02-18T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.060919 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.060966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.060976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.060992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.061001 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.169294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.169347 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.169360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.169380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.169402 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.183052 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:11:58.975611979 +0000 UTC
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.212623 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.212850 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:32:29 crc kubenswrapper[4957]: E0218 14:32:29.213090 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:29 crc kubenswrapper[4957]: E0218 14:32:29.213227 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.274078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.274165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.274191 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.274227 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.274245 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.376746 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.376802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.376815 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.376831 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.376843 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.479958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.479994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.480004 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.480020 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.480030 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.583477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.583533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.583554 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.583584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.583604 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.686762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.687094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.687338 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.687573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.687738 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.790705 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.791012 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.791082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.791163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.791223 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.893280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.893611 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.893735 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.893846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.893992 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.996737 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.996810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.996823 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.996843 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:32:29 crc kubenswrapper[4957]: I0218 14:32:29.996856 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:29Z","lastTransitionTime":"2026-02-18T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.099794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.099853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.099865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.099882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.099898 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.183774 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:47:00.504613702 +0000 UTC Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.199938 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.202740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.202774 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.202786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.202802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.202817 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.209445 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.212561 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.212565 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.212712 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.212921 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.216787 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.230115 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.247465 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.259322 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.268279 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.287058 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.300390 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.304903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.304946 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.304957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.304976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.304991 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.311958 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.323923 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.336648 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.347251 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.381473 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.405676 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.407125 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.407163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.407176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.407195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.407209 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.417760 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.431685 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.442270 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.509517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.509590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.509612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.509643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.509666 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.611702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.611762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.611776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.611794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.611808 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.713927 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.714010 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.714027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.714047 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.714060 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.817449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.817543 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.817570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.817595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.817615 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.870725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.870875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.870893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.870911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.870923 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.890016 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.894436 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.894500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.894522 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.894541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.894553 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.908938 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.913249 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.913272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.913280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.913293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.913301 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.926570 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.931592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.931648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.931659 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.931681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.931698 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.946018 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.950469 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.950515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.950525 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.950545 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.950558 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.962871 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:30Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:30 crc kubenswrapper[4957]: E0218 14:32:30.963049 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.965052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.965122 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.965136 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.965154 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:30 crc kubenswrapper[4957]: I0218 14:32:30.965169 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:30Z","lastTransitionTime":"2026-02-18T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.068526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.068595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.068609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.068634 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.068649 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.171921 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.171981 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.171997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.172020 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.172034 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.185535 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:45:24.748559031 +0000 UTC Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.212228 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.212291 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:31 crc kubenswrapper[4957]: E0218 14:32:31.212465 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:31 crc kubenswrapper[4957]: E0218 14:32:31.212573 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.274530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.274602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.274622 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.274646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.274667 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.378233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.378324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.378360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.378389 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.378406 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.481655 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.481702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.481715 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.481736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.481755 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.584645 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.585033 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.585175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.585310 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.585463 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.688082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.688118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.688134 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.688152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.688163 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.790897 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.790982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.791006 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.791034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.791056 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.893868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.893903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.893911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.893925 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.893935 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.996556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.996607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.996620 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.996639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:31 crc kubenswrapper[4957]: I0218 14:32:31.996651 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:31Z","lastTransitionTime":"2026-02-18T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.099980 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.100046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.100065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.100095 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.100114 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.185949 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:58:09.063634625 +0000 UTC Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.203192 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.203261 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.203275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.203296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.203314 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.212765 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.212792 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:32 crc kubenswrapper[4957]: E0218 14:32:32.212908 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:32 crc kubenswrapper[4957]: E0218 14:32:32.213029 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.306268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.306316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.306327 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.306346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.306357 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.408880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.409235 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.409389 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.409586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.409806 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.513179 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.513530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.513679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.513788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.513873 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.616990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.617046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.617062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.617086 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.617104 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.719575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.719615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.719625 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.719642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.719653 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.821783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.821825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.821836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.821852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.821863 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.924677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.924723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.924735 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.924753 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:32 crc kubenswrapper[4957]: I0218 14:32:32.924764 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:32Z","lastTransitionTime":"2026-02-18T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.027389 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.027468 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.027483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.027501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.027514 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.129924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.130007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.130030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.130064 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.130087 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.186557 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:46:51.327857686 +0000 UTC Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.211843 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.211924 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:33 crc kubenswrapper[4957]: E0218 14:32:33.212040 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:33 crc kubenswrapper[4957]: E0218 14:32:33.212634 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.212724 4957 scope.go:117] "RemoveContainer" containerID="01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1" Feb 18 14:32:33 crc kubenswrapper[4957]: E0218 14:32:33.212862 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.232370 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.232414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.232447 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.232471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.232488 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.335353 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.335465 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.335493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.335527 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.335552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.438091 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.438144 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.438158 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.438182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.438200 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.541196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.541246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.541258 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.541276 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.541289 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.683778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.683838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.683855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.683880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.683899 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.787506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.787568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.787582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.787602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.787615 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.890039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.890094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.890112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.890138 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.890156 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.993100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.993360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.993442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.993513 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:33 crc kubenswrapper[4957]: I0218 14:32:33.993594 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:33Z","lastTransitionTime":"2026-02-18T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.097931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.097991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.098007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.098027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.098039 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.187276 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:16:57.101549719 +0000 UTC Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.200531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.200573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.200584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.200601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.200615 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.212079 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.212159 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:34 crc kubenswrapper[4957]: E0218 14:32:34.212232 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:34 crc kubenswrapper[4957]: E0218 14:32:34.212459 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.226394 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.237184 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.254262 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.266231 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.284479 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.302038 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.303209 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.303244 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.303255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.303272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.303287 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.325372 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.339802 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.353299 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.373186 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.390243 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.405937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.406026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.406051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.406082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.406107 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.407178 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.434476 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.449940 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.460096 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.487323 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.505197 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:34Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.507989 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.508043 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.508052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.508068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.508078 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.610791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.610879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.610893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.610912 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.610923 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.714264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.714682 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.714935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.715186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.715410 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.818280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.818361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.818375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.818395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.818407 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.921260 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.921698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.921888 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.922129 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:34 crc kubenswrapper[4957]: I0218 14:32:34.922595 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:34Z","lastTransitionTime":"2026-02-18T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.026457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.026727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.026811 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.026885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.027001 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.129916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.129991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.130016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.130048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.130071 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.187924 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:23:16.25233345 +0000 UTC Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.212446 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.212451 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:35 crc kubenswrapper[4957]: E0218 14:32:35.212668 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:35 crc kubenswrapper[4957]: E0218 14:32:35.212838 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.233805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.233875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.233891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.233911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.233924 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.335838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.336132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.336213 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.336489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.336573 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.439332 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.439727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.439841 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.439933 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.440026 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.543094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.543127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.543138 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.543152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.543161 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.645952 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.646242 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.646337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.646486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.646671 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.750390 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.750782 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.750857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.750930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.750996 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.885865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.886117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.886187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.886297 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.886361 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.989756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.989793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.989802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.989819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:35 crc kubenswrapper[4957]: I0218 14:32:35.989829 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:35Z","lastTransitionTime":"2026-02-18T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.093088 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.093472 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.093496 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.093517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.093532 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.188926 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:24:04.520172267 +0000 UTC Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.196202 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.196359 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.196561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.196724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.196894 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.212545 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.212709 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:36 crc kubenswrapper[4957]: E0218 14:32:36.213001 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:36 crc kubenswrapper[4957]: E0218 14:32:36.213155 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.300294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.300600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.300675 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.300755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.300822 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.402736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.403012 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.403110 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.403203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.403276 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.506059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.506145 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.506159 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.506187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.506202 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.609155 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.609240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.609253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.609282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.609307 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.711680 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.712054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.712068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.712086 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.712104 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.814333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.814661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.814744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.814831 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.814908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.918030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.918489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.918576 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.918679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:36 crc kubenswrapper[4957]: I0218 14:32:36.918787 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:36Z","lastTransitionTime":"2026-02-18T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.022150 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.022202 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.022213 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.022231 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.022246 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.124790 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.124836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.124850 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.124869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.124881 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.189353 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:42:05.051922204 +0000 UTC Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.212725 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:37 crc kubenswrapper[4957]: E0218 14:32:37.212871 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.212730 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:37 crc kubenswrapper[4957]: E0218 14:32:37.213309 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.227482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.227572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.227586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.227632 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.227643 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.330753 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.330788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.330799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.330816 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.330826 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.433725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.433760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.433778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.433796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.433824 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.537609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.537652 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.537664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.537681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.537692 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.640770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.640847 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.640865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.640886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.640900 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.743095 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.743523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.743666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.743792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.743922 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.846893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.847239 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.847388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.847556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.847751 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.950298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.950327 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.950335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.950349 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:37 crc kubenswrapper[4957]: I0218 14:32:37.950360 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:37Z","lastTransitionTime":"2026-02-18T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.053980 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.054041 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.054059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.054103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.054120 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.156776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.156820 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.156829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.156848 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.156859 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.190310 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:37:05.475403436 +0000 UTC Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.212842 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:38 crc kubenswrapper[4957]: E0218 14:32:38.213301 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.213669 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:38 crc kubenswrapper[4957]: E0218 14:32:38.213759 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.260250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.260305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.260315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.260337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.260349 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.363518 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.363563 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.363575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.363598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.363637 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.466963 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.467031 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.467056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.467090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.467116 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.569831 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.569873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.569882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.569895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.569904 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.672889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.672955 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.672966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.672987 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.673000 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.775129 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.775176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.775188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.775203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.775214 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.877671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.877701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.877709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.877723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.877732 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.979740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.979796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.979812 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.979827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:38 crc kubenswrapper[4957]: I0218 14:32:38.979839 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:38Z","lastTransitionTime":"2026-02-18T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.082167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.082219 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.082234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.082251 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.082263 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.184260 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.184308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.184323 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.184342 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.184353 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.190717 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:05:43.194453728 +0000 UTC Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.212480 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:39 crc kubenswrapper[4957]: E0218 14:32:39.212652 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.212480 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:39 crc kubenswrapper[4957]: E0218 14:32:39.212790 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.286670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.286727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.286736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.286751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.286762 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.389151 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.389224 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.389242 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.389270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.389287 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.492261 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.492333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.492363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.492393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.492451 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.595018 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.595052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.595063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.595079 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.595090 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.696771 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.696856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.696871 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.696891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.696910 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.799472 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.799518 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.799531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.799549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.799560 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.901596 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.901649 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.901660 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.901678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:39 crc kubenswrapper[4957]: I0218 14:32:39.901714 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:39Z","lastTransitionTime":"2026-02-18T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.003547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.003601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.003617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.003638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.003653 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.105683 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.105728 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.105741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.105758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.105807 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.191574 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:44:47.88148544 +0000 UTC Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.208820 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.208864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.208875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.208892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.208905 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.212349 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.212719 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:40 crc kubenswrapper[4957]: E0218 14:32:40.212807 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:40 crc kubenswrapper[4957]: E0218 14:32:40.212886 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.312188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.312542 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.312605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.312696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.312773 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.416238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.416327 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.416346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.416374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.416392 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.519093 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.519149 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.519161 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.519180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.519195 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.621791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.621916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.621981 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.622020 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.622046 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.720597 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:40 crc kubenswrapper[4957]: E0218 14:32:40.720737 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:40 crc kubenswrapper[4957]: E0218 14:32:40.720801 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:33:12.720783392 +0000 UTC m=+99.241648136 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.725326 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.725401 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.725438 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.725466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.725485 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.827898 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.827960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.827978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.828005 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.828026 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.931038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.931532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.931669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.931840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.931911 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.986233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.986558 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.986643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.986732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:40 crc kubenswrapper[4957]: I0218 14:32:40.986802 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:40Z","lastTransitionTime":"2026-02-18T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.005203 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:41Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.009431 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.009719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.009822 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.009928 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.010004 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.022988 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:41Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.028198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.028254 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.028264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.028281 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.028316 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.042005 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:41Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.045755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.045886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.045953 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.046017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.046083 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.058541 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:41Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.062411 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.062573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.062663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.062751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.062838 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.074202 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:41Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.074681 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.076304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.076356 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.076371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.076392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.076409 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.179447 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.179793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.179908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.180003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.180095 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.191772 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:22:46.936533055 +0000 UTC Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.212146 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.212280 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.212526 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:41 crc kubenswrapper[4957]: E0218 14:32:41.212813 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.283440 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.283763 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.283886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.283964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.284029 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.387068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.387121 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.387133 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.387149 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.387162 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.490232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.490280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.490295 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.490313 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.490325 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.592521 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.592584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.592598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.592618 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.592629 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.695192 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.695248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.695258 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.695274 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.695285 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.797452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.797475 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.797483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.797497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.797506 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.899206 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.899266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.899283 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.899307 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:41 crc kubenswrapper[4957]: I0218 14:32:41.899328 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:41Z","lastTransitionTime":"2026-02-18T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.001880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.001932 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.001946 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.001965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.001976 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.104467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.104520 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.104532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.104549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.104561 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.192466 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:03:13.183691267 +0000 UTC Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.207796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.207849 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.207869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.207891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.207905 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.212102 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.212140 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:42 crc kubenswrapper[4957]: E0218 14:32:42.212217 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:42 crc kubenswrapper[4957]: E0218 14:32:42.212345 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.310669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.310981 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.311046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.311130 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.311191 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.414408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.414474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.414486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.414502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.414521 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.517163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.517205 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.517220 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.517237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.517250 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.620445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.620478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.620489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.620503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.620512 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.664120 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/0.log" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.664540 4957 generic.go:334] "Generic (PLEG): container finished" podID="e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb" containerID="644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673" exitCode=1 Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.664724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerDied","Data":"644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.665574 4957 scope.go:117] "RemoveContainer" containerID="644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.682910 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.698310 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.712804 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.724581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.724644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.724669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.724699 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.724721 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.725700 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.744566 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.758324 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.770629 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.783733 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.796231 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.806658 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.820003 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.827179 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.827219 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.827230 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.827246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.827258 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.832587 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.845766 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.858990 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.871731 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.887697 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e
9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.901731 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:42Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.930038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.930458 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.930605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.930727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:42 crc kubenswrapper[4957]: I0218 14:32:42.930815 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:42Z","lastTransitionTime":"2026-02-18T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.033213 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.033553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.033628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.033696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.033763 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.136383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.136743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.136815 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.136887 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.136955 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.193223 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:56:22.251642407 +0000 UTC Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.212811 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.212885 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:43 crc kubenswrapper[4957]: E0218 14:32:43.212968 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:43 crc kubenswrapper[4957]: E0218 14:32:43.213048 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.239963 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.240013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.240027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.240043 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.240054 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.342288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.342342 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.342354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.342371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.342383 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.444979 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.445023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.445035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.445052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.445065 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.547389 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.547468 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.547484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.547506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.547521 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.650160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.650209 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.650221 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.650237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.650252 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.669743 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/0.log" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.670154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerStarted","Data":"3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.683336 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.693857 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.710509 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.721254 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.731628 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.743253 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.753145 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.753191 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.753199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.753215 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.753228 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.756327 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.770011 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.781985 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.796733 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.809518 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.827367 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.838498 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.849563 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.855520 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.855572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.855586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.855615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.855628 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
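Every failed status patch above shares one root cause: the kubelet's PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the TLS handshake fails x509 validation because the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-02-18. The error text comes from a plain validity-window comparison; a minimal Go sketch of that check (the certificate path here is hypothetical, not taken from this log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: the webhook's serving-cert location is not shown in this log.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same validity-window comparison that yields
	// "x509: certificate has expired or is not yet valid".
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is not within [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}
```

Until that certificate is rotated (or the node clock corrected), every pod status patch on this node keeps failing with the identical webhook error, which is why the same message repeats verbatim for each pod in the records that follow.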
Has your network provider started?"} Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.860130 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.871273 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.882698 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.958443 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.958482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.958494 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.958513 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:43 crc kubenswrapper[4957]: I0218 14:32:43.958525 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:43Z","lastTransitionTime":"2026-02-18T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.061131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.061193 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.061206 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.061221 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.061232 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.164007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.164077 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.164103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.164136 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.164174 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.194235 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:31:29.93971834 +0000 UTC Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.212183 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.212185 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:44 crc kubenswrapper[4957]: E0218 14:32:44.212300 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:44 crc kubenswrapper[4957]: E0218 14:32:44.212344 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.230102 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.246581 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.266458 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.266663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.266677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.266685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.266697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.266706 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.279099 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.294026 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.306995 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.321392 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.336529 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.352537 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.365746 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.369118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.369166 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.369180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.369197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.369209 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.376176 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc 
kubenswrapper[4957]: I0218 14:32:44.389165 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.402965 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.415858 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.427833 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.448545 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e
9abdd72986eca6f68f01e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.461430 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.471760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.471798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.471807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.471821 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.471831 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.573504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.573539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.573572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.573585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.573594 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.676968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.677017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.677065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.677084 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.677093 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.779622 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.779676 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.779690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.779709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.779723 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.882752 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.882794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.882803 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.882817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.882827 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.984717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.984757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.984769 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.984785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:44 crc kubenswrapper[4957]: I0218 14:32:44.984798 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:44Z","lastTransitionTime":"2026-02-18T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.087382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.087448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.087456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.087472 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.087482 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.189312 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.189354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.189366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.189382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.189394 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.194463 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:01:09.372518263 +0000 UTC Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.213015 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.213189 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:45 crc kubenswrapper[4957]: E0218 14:32:45.213298 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:45 crc kubenswrapper[4957]: E0218 14:32:45.213573 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.291790 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.291844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.291857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.291873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.291885 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.394645 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.394686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.394696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.394709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.394718 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.497467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.497516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.497526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.497547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.497557 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.600163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.600206 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.600217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.600233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.600244 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.702408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.702455 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.702464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.702479 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.702490 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.805197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.805237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.805250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.805266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.805279 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.907398 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.907460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.907473 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.907488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:45 crc kubenswrapper[4957]: I0218 14:32:45.907499 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:45Z","lastTransitionTime":"2026-02-18T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.010245 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.010289 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.010298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.010317 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.010333 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.113166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.113492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.113571 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.113639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.113699 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.194873 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:14:59.959729162 +0000 UTC Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.212273 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:46 crc kubenswrapper[4957]: E0218 14:32:46.212400 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.212580 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:46 crc kubenswrapper[4957]: E0218 14:32:46.212712 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.215865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.215904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.215914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.215928 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.215940 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.318407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.318456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.318464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.318477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.318488 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.421046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.421094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.421103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.421119 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.421135 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.523717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.523768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.523781 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.523796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.523806 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.626866 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.626924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.626933 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.626967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.626979 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.730709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.730765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.730778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.730798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.730813 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.832961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.833268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.833395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.833521 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.833637 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.936256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.936303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.936314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.936330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:46 crc kubenswrapper[4957]: I0218 14:32:46.936343 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:46Z","lastTransitionTime":"2026-02-18T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.039464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.039815 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.039895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.039968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.040043 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.144032 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.144113 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.144135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.144164 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.144192 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.194996 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:49:33.985680247 +0000 UTC Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.212296 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.212369 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:47 crc kubenswrapper[4957]: E0218 14:32:47.212553 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:47 crc kubenswrapper[4957]: E0218 14:32:47.212648 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.213365 4957 scope.go:117] "RemoveContainer" containerID="01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.246240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.246284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.246297 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.246316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.246327 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.348553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.348594 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.348603 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.348619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.348631 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.458476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.458517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.458527 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.458543 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.458554 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.560363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.560430 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.560443 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.560460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.560472 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.663234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.663278 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.663291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.663309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.663322 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.687435 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/2.log" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.689619 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.690100 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.710042 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3
abdf781b93edf3396b81f192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.723095 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 
14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.736575 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.750893 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.764265 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.765621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.765666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.765678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.765699 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.765712 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.776155 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.786784 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.801498 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.822400 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.838806 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.848983 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.860967 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9
87117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.867795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.867830 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.867840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.867854 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.867863 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.874922 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.887986 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.899799 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.912506 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.924369 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:47Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.970446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.970485 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.970494 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.970508 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:47 crc kubenswrapper[4957]: I0218 14:32:47.970523 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:47Z","lastTransitionTime":"2026-02-18T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.074002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.074062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.074080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.074105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.074120 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.176992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.177318 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.177384 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.177478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.177550 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.195781 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:56:31.116292438 +0000 UTC Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.212290 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.212405 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:48 crc kubenswrapper[4957]: E0218 14:32:48.212541 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:48 crc kubenswrapper[4957]: E0218 14:32:48.212647 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.280584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.280639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.280651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.280669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.280681 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.383786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.383827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.383838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.383855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.383866 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.486252 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.486285 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.486294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.486309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.486318 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.589497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.589536 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.589546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.589562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.589573 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.692206 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.692248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.692257 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.692274 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.692286 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.695640 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/3.log" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.696893 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/2.log" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.699769 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" exitCode=1 Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.699798 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.699866 4957 scope.go:117] "RemoveContainer" containerID="01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.700541 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" Feb 18 14:32:48 crc kubenswrapper[4957]: E0218 14:32:48.700698 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.712759 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.728193 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.739485 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 
14:32:48.753858 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.771158 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.789618 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.794861 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.794902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.794914 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.794933 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.794947 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.801130 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.811392 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.834516 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.855405 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.871269 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.888922 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3
abdf781b93edf3396b81f192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01eae4440ba56e59be4a568c08e1c676d03e370e9abdd72986eca6f68f01e0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:21Z\\\",\\\"message\\\":\\\"erving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075eb187 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0218 14:32:21.034484 6635 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not add\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:48Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 14:32:48.038296 7032 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 14:32:48.038339 7032 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.897854 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.897912 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.897926 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.897949 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.897962 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:48Z","lastTransitionTime":"2026-02-18T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.898985 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.911210 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.922908 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.939128 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:48 crc kubenswrapper[4957]: I0218 14:32:48.954705 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.000840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.000915 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.000935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.000967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.000987 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.103196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.103271 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.103290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.103314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.103332 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.196824 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:55:35.765000466 +0000 UTC Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.206396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.206448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.206461 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.206481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.206491 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.212170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.212170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:49 crc kubenswrapper[4957]: E0218 14:32:49.212394 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:49 crc kubenswrapper[4957]: E0218 14:32:49.212825 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.309182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.309243 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.309257 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.309277 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.309289 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.411497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.411537 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.411547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.411568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.411579 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.515047 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.515168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.515198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.515231 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.515268 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.618781 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.618853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.618875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.618908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.618934 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.704280 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/3.log" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.707552 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" Feb 18 14:32:49 crc kubenswrapper[4957]: E0218 14:32:49.707727 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.721184 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.721215 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.721226 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.721241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.721254 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.722362 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.736938 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.754735 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.770962 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.784727 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.800221 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.814455 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.827382 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.836256 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 
14:32:49.849167 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.854203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.854275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.854295 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.854319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.854336 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.866192 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.880110 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.893715 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.904899 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.917011 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.947617 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:48Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 14:32:48.038296 7032 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 14:32:48.038339 7032 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.961134 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.961169 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.961182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.961200 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.961213 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:49Z","lastTransitionTime":"2026-02-18T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:49 crc kubenswrapper[4957]: I0218 14:32:49.967242 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:49Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.063845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.063902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.063920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.063940 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.063956 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.166592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.166645 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.166662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.166682 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.166695 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.197604 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:07:51.288204095 +0000 UTC Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.212374 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.212385 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:50 crc kubenswrapper[4957]: E0218 14:32:50.212638 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:50 crc kubenswrapper[4957]: E0218 14:32:50.212802 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.269002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.269049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.269064 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.269083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.269100 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.371639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.371714 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.371731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.371756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.371777 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.473856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.473889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.473897 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.473911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.473921 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.576607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.576650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.576661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.576677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.576688 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.680246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.680314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.680337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.680364 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.680384 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.782805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.782872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.782903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.782931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.782950 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.885740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.885787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.885799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.885819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.885833 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.988153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.988203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.988214 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.988232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:50 crc kubenswrapper[4957]: I0218 14:32:50.988243 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:50Z","lastTransitionTime":"2026-02-18T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.090713 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.090754 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.090762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.090777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.090787 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.194447 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.194504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.194516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.194532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.194544 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.198737 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:55:54.407239217 +0000 UTC Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.212372 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.212447 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.212600 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.212689 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.296677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.296719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.296740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.296761 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.296772 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.330302 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.330337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.330347 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.330362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.330372 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.351773 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:51Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.356398 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.356471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.356486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.356503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.356518 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.374851 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:51Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.378601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.378637 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
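The NotReady heartbeats above all trace back to one condition: the kubelet finds no CNI network config under /etc/kubernetes/cni/net.d/ and keeps the node's Ready condition False until a network provider writes one. As a rough illustration of that probe (a minimal sketch, not the kubelet's actual implementation; the glob patterns are assumptions based on common CNI config file extensions):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the recurring log message above.
	confDir := "/etc/kubernetes/cni/net.d"
	// Assumed extensions; CNI configs are conventionally .conf, .conflist, or .json.
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob error:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Mirrors the message the kubelet keeps logging while the dir is empty.
		fmt.Printf("no CNI configuration file in %s/. Has your network provider started?\n", confDir)
		return
	}
	fmt.Println("CNI config files present:", found)
}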
event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.378647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.378664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.378677 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.395296 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:51Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.399000 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.399027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
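Each failed status patch above dies on the same TLS step: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-02-18), so verification fails before the patch is ever applied. A minimal sketch of the same validity check using Go's standard crypto/x509 (the certificate path is hypothetical; substitute the webhook's actual serving cert):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path, for illustration only.
	data, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// Same shape as the verification error in the log records above.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate valid until", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}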
event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.399037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.399050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.399060 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.413037 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:51Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.416504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.416533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.416542 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.416556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.416565 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.429994 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:51Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:51 crc kubenswrapper[4957]: E0218 14:32:51.430480 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.432403 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.432654 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.432777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.432866 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.432943 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.535047 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.535298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.535375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.535474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.535553 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.638452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.638790 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.638853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.638943 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.639029 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.742390 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.742495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.742520 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.742551 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.742573 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.845762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.845791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.845799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.845813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.845821 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.949626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.949992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.950335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.950593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:51 crc kubenswrapper[4957]: I0218 14:32:51.950889 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:51Z","lastTransitionTime":"2026-02-18T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.054363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.054453 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.054474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.054499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.054519 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.157250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.157290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.157299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.157315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.157325 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.198971 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:54:16.084496254 +0000 UTC Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.212169 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:52 crc kubenswrapper[4957]: E0218 14:32:52.212357 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.212731 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:52 crc kubenswrapper[4957]: E0218 14:32:52.212883 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.260511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.260562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.260574 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.260593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.260607 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.363683 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.363740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.363756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.363776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.363793 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.466876 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.466923 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.466933 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.466948 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.466960 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.569971 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.570082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.570109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.570142 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.570166 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.672830 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.672881 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.672892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.672909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.672919 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.775723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.775792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.775807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.775827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.775842 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.879223 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.879278 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.879293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.879314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.879329 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.983467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.983535 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.983557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.983581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:52 crc kubenswrapper[4957]: I0218 14:32:52.983597 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:52Z","lastTransitionTime":"2026-02-18T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.086556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.086614 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.086628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.086646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.086659 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.189407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.189479 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.189493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.189529 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.189540 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.199851 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:56:02.621730084 +0000 UTC Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.212172 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.212172 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:53 crc kubenswrapper[4957]: E0218 14:32:53.212317 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:53 crc kubenswrapper[4957]: E0218 14:32:53.212441 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.291786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.291824 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.291835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.291852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.291864 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.394556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.394608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.394625 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.394646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.394662 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.497784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.497836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.497853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.497875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.497893 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.599842 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.599908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.599920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.599941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.599956 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.702560 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.702899 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.703089 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.703291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.703470 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.806484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.806575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.806590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.806615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.806627 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.912121 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.912526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.912597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.912749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:53 crc kubenswrapper[4957]: I0218 14:32:53.912847 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:53Z","lastTransitionTime":"2026-02-18T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.015003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.015057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.015069 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.015088 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.015101 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.117961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.118024 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.118036 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.118055 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.118071 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.200993 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:12:54.427104714 +0000 UTC Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.212031 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.212092 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:54 crc kubenswrapper[4957]: E0218 14:32:54.212236 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:54 crc kubenswrapper[4957]: E0218 14:32:54.212354 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.220621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.220667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.220679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.220697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.220711 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.231722 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.250078 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.267855 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.277931 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.289491 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.302639 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.321456 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:48Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 14:32:48.038296 7032 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 14:32:48.038339 7032 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.323539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.323605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.323625 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.323649 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.323663 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.335505 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.350205 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.389902 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.401775 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb
4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.413681 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.426587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.426638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.426651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.426667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.426679 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.428010 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.444156 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.458794 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.469126 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.478180 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:32:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.529545 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.529593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.529605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.529622 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.529634 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.632065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.632117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.632127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.632143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.632154 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.734711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.734764 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.734779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.734801 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.734817 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.837612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.837657 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.837667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.837688 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.837700 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.940940 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.940982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.940994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.941012 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:54 crc kubenswrapper[4957]: I0218 14:32:54.941026 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:54Z","lastTransitionTime":"2026-02-18T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.043982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.044325 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.044337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.044351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.044361 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.147123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.147168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.147179 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.147197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.147210 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.201226 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:09:11.383177078 +0000 UTC Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.212834 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.212900 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:55 crc kubenswrapper[4957]: E0218 14:32:55.213036 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:55 crc kubenswrapper[4957]: E0218 14:32:55.213203 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.250176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.250227 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.250237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.250256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.250271 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.353483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.353531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.353541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.353563 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.353574 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.456635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.456706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.456719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.456740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.456752 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.560164 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.560232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.560255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.560282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.560302 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.663280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.663325 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.663337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.663354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.663366 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.765654 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.765776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.765795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.766185 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.766386 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.869881 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.869923 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.869931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.869944 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.869954 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.972492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.972533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.972548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.972566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:55 crc kubenswrapper[4957]: I0218 14:32:55.972578 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:55Z","lastTransitionTime":"2026-02-18T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.075986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.076043 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.076060 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.076080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.076091 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.178723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.178774 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.178787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.178805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.178818 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.202384 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:28:12.150050923 +0000 UTC Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.212776 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.212830 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:56 crc kubenswrapper[4957]: E0218 14:32:56.212909 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:56 crc kubenswrapper[4957]: E0218 14:32:56.213119 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.225807 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.280968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.280996 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.281003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.281017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.281027 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.383716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.383786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.383806 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.383832 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.383849 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.486796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.486852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.486868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.486891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.486908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.589885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.589934 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.589945 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.589969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.589981 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.692848 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.692897 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.692913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.692935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.692949 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.795902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.795963 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.795984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.796009 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.796027 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.899311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.899360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.899376 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.899395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:56 crc kubenswrapper[4957]: I0218 14:32:56.899413 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:56Z","lastTransitionTime":"2026-02-18T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.002575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.002635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.002648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.002668 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.002681 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.106075 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.106105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.106112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.106126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.106136 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.203540 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:29:31.692419004 +0000 UTC Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.208503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.208553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.208569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.208590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.208605 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.212672 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.212797 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.212672 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.212972 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.310976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.311035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.311047 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.311065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.311078 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.414624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.414679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.414710 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.414735 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.414750 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.442139 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.442300 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442338 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.442302816 +0000 UTC m=+147.963167580 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.442402 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442506 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442538 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442555 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.442581 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442620 4957 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.442596765 +0000 UTC m=+147.963461599 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.442651 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442676 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442740 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442740 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442776 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442794 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442746 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.442717538 +0000 UTC m=+147.963582432 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442867 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 14:34:01.442851082 +0000 UTC m=+147.963716026 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:32:57 crc kubenswrapper[4957]: E0218 14:32:57.442887 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.442877073 +0000 UTC m=+147.963742037 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.517255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.517563 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.517612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.517638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.517655 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
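Every failed volume operation above is parked with the same "No retries permitted until ... (durationBeforeRetry 1m4s)" message: the operation executor applies a capped exponential backoff per volume, doubling the wait after each consecutive failure. A sketch of that policy, assuming kubelet's usual 500 ms initial delay and a cap around 2m2s (treat both as assumptions); eight straight failures gives exactly the 1m4s seen here.

package main

import (
	"fmt"
	"time"
)

// backoff returns the wait imposed after the nth consecutive failure:
// start at `initial`, double each time, never exceed `max`.
func backoff(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	initial := 500 * time.Millisecond
	max := 2*time.Minute + 2*time.Second // assumed cap
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, backoff(initial, max, n))
	}
	// failure 8 -> wait 1m4s, matching the retry timestamps above
	// (14:32:57 + 64s = 14:34:01); failure 9 would hit the cap.
}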
Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.620976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.621035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.621045 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.621070 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.621082 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.724442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.724521 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.724544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.724564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.724575 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.827176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.827251 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.827267 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.827295 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.827314 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.930734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.930833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.930861 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.930894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:57 crc kubenswrapper[4957]: I0218 14:32:57.930920 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:57Z","lastTransitionTime":"2026-02-18T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.034063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.034138 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.034159 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.034186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.034211 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.137408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.137481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.137494 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.137515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.137527 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.204480 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:26:23.828727438 +0000 UTC Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.212920 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:32:58 crc kubenswrapper[4957]: E0218 14:32:58.213127 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.213205 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:32:58 crc kubenswrapper[4957]: E0218 14:32:58.213477 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.240603 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.240649 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.240661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.240679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.240693 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.343903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.343969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.343988 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.344013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.344028 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.447057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.447110 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.447118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.447137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.447147 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.550074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.550142 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.550156 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.550179 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.550204 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.653508 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.653557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.653568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.653586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.653597 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.756514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.756926 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.756950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.756973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.756985 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.859575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.859634 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.859653 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.859674 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.859700 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.962568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.962627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.962638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.962659 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:58 crc kubenswrapper[4957]: I0218 14:32:58.962674 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:58Z","lastTransitionTime":"2026-02-18T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.065501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.065587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.065598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.065632 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.065646 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.168907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.168976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.168992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.169021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.169047 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.205529 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:21:46.333707007 +0000 UTC Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.212993 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.212996 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:32:59 crc kubenswrapper[4957]: E0218 14:32:59.213266 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:32:59 crc kubenswrapper[4957]: E0218 14:32:59.213475 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.272032 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.272126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.272151 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.272181 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.272204 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.376338 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.376410 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.376471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.376505 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.376529 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.480199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.480263 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.480277 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.480305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.480320 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.583628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.583698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.583711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.583730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.583741 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.687582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.687635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.687646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.687666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.688079 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.791507 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.791588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.791602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.791620 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.791675 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.894964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.895045 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.895074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.895104 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.895125 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.998467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.998554 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.998579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.998608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:32:59 crc kubenswrapper[4957]: I0218 14:32:59.998628 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:32:59Z","lastTransitionTime":"2026-02-18T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.101143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.101175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.101183 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.101196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.101204 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.204201 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.204359 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.204399 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.204460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.204480 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.206515 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:13:01.480623006 +0000 UTC Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.211891 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:00 crc kubenswrapper[4957]: E0218 14:33:00.212131 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.212208 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:00 crc kubenswrapper[4957]: E0218 14:33:00.212376 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.307747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.307792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.307808 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.307831 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.307846 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.411018 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.411102 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.411123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.411152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.411170 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.514279 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.514343 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.514357 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.514380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.514396 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.617386 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.617537 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.617564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.617599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.617619 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.722961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.723023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.723035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.723054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.723085 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.826810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.826913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.826942 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.826973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.826997 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.930017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.930459 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.930906 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.931167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:00 crc kubenswrapper[4957]: I0218 14:33:00.931304 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:00Z","lastTransitionTime":"2026-02-18T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.043328 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.043380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.043392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.043409 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.043447 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.146620 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.146683 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.146707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.146738 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.146760 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.206816 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:01:39.09939938 +0000 UTC Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.212186 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.212190 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.212377 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.212590 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.249566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.249695 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.249720 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.249748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.249769 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.351442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.351728 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.351803 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.351873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.351940 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.454845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.454917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.454935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.454960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.454978 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.557652 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.557696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.557709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.557727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.557737 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.647871 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.647937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.647954 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.647978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.648038 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.672396 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.678532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.678617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.678644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.678677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.678701 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.701782 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.706479 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.706559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
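Editor's note: every retry above fails for the same reason, recorded at the tail of each error: the node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-18. A minimal Go sketch of the validity check the TLS handshake performs follows; the certificate file path is hypothetical and stands in for the webhook's serving certificate extracted from the cluster.

// Sketch (not part of the log): reproduce the x509 validity check offline.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; use the webhook's actual serving certificate.
	pemBytes, err := os.ReadFile("webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// Mirrors the failure mode in the log: "current time ... is after ..."
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}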
event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.706588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.706619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.706642 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.725169 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.730751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.730828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
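Editor's note: each "Node became not ready" entry above embeds the Ready condition as a JSON object. A small sketch, assuming only the fields visible in the log (not the full Kubernetes NodeCondition type), that decodes one of these condition blobs for offline analysis:

// Sketch (not part of the log): decode the condition={...} JSON from the entries above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors only the fields that appear in the log output.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Taken verbatim from a "Node became not ready" entry in this log.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}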
event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.730855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.730886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.730908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.749631 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.754796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.754839 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
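Editor's note: every NotReady condition in this excerpt traces back to the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A sketch of a presence check for that directory follows; treating .conf/.conflist/.json extensions as config files is an illustrative assumption, not the kubelet's exact matching rule.

// Sketch (not part of the log): check whether any CNI network config exists
// in the directory named by the errors above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// Assumed extensions for illustration only.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; network plugin likely not started")
	}
}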
event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.754852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.754885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.754898 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.774058 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:01 crc kubenswrapper[4957]: E0218 14:33:01.774320 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.776170 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.776210 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.776222 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.776239 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.776253 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.878650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.878691 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.878702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.878717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.878727 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.981821 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.981890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.981908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.981934 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:01 crc kubenswrapper[4957]: I0218 14:33:01.981954 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:01Z","lastTransitionTime":"2026-02-18T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.084537 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.084570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.084579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.084595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.084604 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.186512 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.186574 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.186585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.186600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.186618 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.207863 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:09:43.734266579 +0000 UTC Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.212560 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.212619 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:02 crc kubenswrapper[4957]: E0218 14:33:02.213041 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:02 crc kubenswrapper[4957]: E0218 14:33:02.213174 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.213713 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" Feb 18 14:33:02 crc kubenswrapper[4957]: E0218 14:33:02.214044 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.289958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.290413 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.290677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.290871 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.291031 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.394194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.394270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.394288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.394309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.394324 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.497543 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.497612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.497631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.497657 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.497675 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.600986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.601037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.601053 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.601079 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.601096 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.704187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.704229 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.704237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.704256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.704266 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.807347 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.807391 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.807400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.807448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.807462 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.910161 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.910241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.910255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.910271 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:02 crc kubenswrapper[4957]: I0218 14:33:02.910282 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:02Z","lastTransitionTime":"2026-02-18T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.012728 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.012762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.012773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.012786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.012796 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.114968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.115017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.115031 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.115049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.115062 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.208296 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:31:55.827226693 +0000 UTC Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.212901 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.212978 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:03 crc kubenswrapper[4957]: E0218 14:33:03.213080 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:03 crc kubenswrapper[4957]: E0218 14:33:03.213192 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.217549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.217661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.217748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.217855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.217939 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.320496 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.320864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.321003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.321163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.321300 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.423946 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.423999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.424012 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.424030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.424046 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.527303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.527365 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.527378 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.527403 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.527432 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.629639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.630037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.630213 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.630361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.630537 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.733167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.733216 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.733231 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.733251 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.733264 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.836440 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.836492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.836510 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.836534 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.836551 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.939216 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.939268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.939282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.939301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:03 crc kubenswrapper[4957]: I0218 14:33:03.939314 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:03Z","lastTransitionTime":"2026-02-18T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.041624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.042030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.042142 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.042253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.042358 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.145373 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.145462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.145480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.145504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.145522 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.209153 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:12:32.752166384 +0000 UTC Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.213143 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:04 crc kubenswrapper[4957]: E0218 14:33:04.213304 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.213893 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:04 crc kubenswrapper[4957]: E0218 14:33:04.214081 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.234956 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"820e2f52-fc4e-43af-8da0-5b1ecb52e54a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdeaa62699fcf37060225a404103ed014fff43f751f744a27891afb76c56d011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cb263fd49c7f580746efa2f81c4e25a4862244502ad5362a1767d6255c023c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287
faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f45a9407db247a891731310afb4c1a7b81ac94e44cfd8e267854a4c4e434e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.247621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.247659 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.247671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.247689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.247703 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.250009 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.262249 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sk96m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:42Z\\\",\\\"message\\\":\\\"2026-02-18T14:31:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0\\\\n2026-02-18T14:31:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5faa22d4-8273-431b-8440-b348b6738df0 to /host/opt/cni/bin/\\\\n2026-02-18T14:31:57Z [verbose] multus-daemon started\\\\n2026-02-18T14:31:57Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:32:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sk96m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.275892 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cde17e3-43e9-4bed-afe8-5b76229e35cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1929ddc0bd8d5e1c054fe6b3399c053a91ace49153f8712ad155416d0df0652e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzslj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-x8wwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.287019 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58c40982-35c8-4670-ad21-513a7a5a458e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gkmzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jkmlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.303334 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21de7d45-3cfa-490a-8ff5-f3d1fe4a25b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ef2dae6a762df2e06b45cba045afd41e5b0ad5fb05bbc83bab9e4aa2cb3525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e16888e8d272463cc6d1de95580644996c57cc899bf9668d019732a2a5beedf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e16888e8d272463cc6d1de95580644996c57cc899bf9668d019732a2a5beedf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.320083 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aa342a3ea42d2c64cf284dd2f7fed142b1ff377cc3c5384e4d940ec4364befc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.334583 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.354828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.355243 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.355613 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.355883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.355988 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.356380 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77ae51a3-a3f7-4ea3-afb9-93558bf3b821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff578d3657e9ba6b371eb3c3a2e63d4c989e3232da6c9f88cf6ba18edc50dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7f4f904ea5311aa8aa0473b62433cd3684d0e1fecdb1b179bfb7a58e463cc27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea791a426f30ba77c019b8f865f9cfe90b915f6c0596002f4914cbe2fa68c27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01a7d815c8c260fb5d2a4586f2640abf157fa209aa8e8aed0e6573a07bdaced6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae36af89d8b5cc2ad00253398fad76d78310bc8adb012b7b819ef55a26b8004\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://def4b27d830824baeb1b965b56d5fc50343585d0fae5a1fed75381364a94caf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a13d3fcb295e963277f66a06bad27866ddb270d48474d0154efc52a8721a64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5sf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s7f5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.369624 4957 status_manager.go:875] "Failed to update status for pod"
pod="openshift-image-registry/node-ca-c5hxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b8a049-72c4-4a87-bb05-a5cc4ebe7a89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8c16012d78884094f9be4f3cd7b67463352ff1391c8f60c109b5f734c80cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mzkhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5hxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.384042 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3bef1f-7cee-4035-bc8e-195fadcf2d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T14:31:47Z\\\",\\\"message\\\":\\\"W0218 14:31:37.311828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 14:31:37.312305 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771425097 cert, and key in /tmp/serving-cert-2580901173/serving-signer.crt, /tmp/serving-cert-2580901173/serving-signer.key\\\\nI0218 14:31:37.668457 1 observer_polling.go:159] Starting file observer\\\\nW0218 14:31:37.670692 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 14:31:37.670892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 14:31:37.672550 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2580901173/tls.crt::/tmp/serving-cert-2580901173/tls.key\\\\\\\"\\\\nF0218 14:31:47.884142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.399318 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c85d37-4ff8-4c5d-83aa-4c5fbf9535c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796493e4b9e802eafa557bda30c8c5442e4fb5705907cfa1f04cad2b36dc20bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc2518870abf503506ce74ccbc0fc66ac6efa632a7394ceca4f58b38a0a1fac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b902fee5b17e97ec39e941b26dd9ef6e2da95f49be1fc25192895894a4182685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68e74be3ce63b0e6fff6a8141e11a9c6e80a376e205bfc7f2745440c6526e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.414072 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.429853 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3339e9ff017eaa7de40b766111f5d7dcb9b5b45f154d917b2f3b1ebad21ee361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815a22c1c4523e923dbe20bb68b1f4195562f2b3a767409555f43c82cd826391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.445936 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96ebe482278c96a988073257a64389543a692cbae60b771542dd45ffdbef9977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.457981 4957
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wn2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6a720f-7d42-48e9-8073-fb4f7417e6cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3317042ce828b73a13e6c2724a7c7663e0c73dd05fef1239f5ad36a7d46b1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qv7rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wn2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.458392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.458556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.458647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.458739 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.458830 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.485065 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:32:48Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 14:32:48.038296 7032 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 14:32:48.038339 7032 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:31:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:31:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ngss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:31:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t7lp9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.497901 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"841cf9b6-bfbb-4ff0-8899-acd00478a669\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52fe12bee0a96d77a82c8edb65574b11ae3d6b6dd3e2a68e918f1d21c0b1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d556fc2e5fb106135a544401595696797ede468aa93818660d86424c49486fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg6v5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-52gh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:04Z is after 2025-08-24T17:21:41Z" Feb 18 
14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.561997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.562407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.562741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.562817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.562832 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.665577 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.665673 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.665693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.665783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.665819 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.768137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.768178 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.768192 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.768212 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.768227 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.871196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.871244 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.871255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.871270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.871282 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.973595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.973651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.973665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.973685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:04 crc kubenswrapper[4957]: I0218 14:33:04.973700 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:04Z","lastTransitionTime":"2026-02-18T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.076533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.076581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.076594 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.076615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.076630 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.179407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.179690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.179753 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.179848 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.179911 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.210337 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:16:50.495806273 +0000 UTC
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.212767 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.212862 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:33:05 crc kubenswrapper[4957]: E0218 14:33:05.212988 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:33:05 crc kubenswrapper[4957]: E0218 14:33:05.213251 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.282080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.282316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.282412 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.282730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.282812 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.386775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.387160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.387236 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.387369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.387464 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.490601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.490643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.490655 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.490672 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.490684 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.593010 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.593072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.593088 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.593105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.593116 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.696374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.696451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.696466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.696485 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.696500 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.799199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.799772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.799946 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.800079 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.800209 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.903978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.904039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.904050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.904066 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:05 crc kubenswrapper[4957]: I0218 14:33:05.904077 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:05Z","lastTransitionTime":"2026-02-18T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.007479 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.007773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.007842 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.007936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.008018 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.109924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.110237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.110330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.110445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.110584 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.211820 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.211816 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:24:23.602811632 +0000 UTC Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.212548 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:06 crc kubenswrapper[4957]: E0218 14:33:06.212660 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:06 crc kubenswrapper[4957]: E0218 14:33:06.213111 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.213870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.213900 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.213909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.213920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.213929 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.316636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.316686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.316703 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.316724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.316740 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.420391 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.420466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.420480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.420502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.420514 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.523907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.523985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.524011 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.524042 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.524064 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.627175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.627250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.627273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.627302 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.627327 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.731480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.731566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.731582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.731607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.731623 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.834368 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.834456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.834480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.834506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.834526 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.937888 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.937927 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.937938 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.937958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:06 crc kubenswrapper[4957]: I0218 14:33:06.937970 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:06Z","lastTransitionTime":"2026-02-18T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.041199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.041255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.041265 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.041287 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.041301 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:07Z","lastTransitionTime":"2026-02-18T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.143509 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.143552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.143562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.143577 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.143591 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:07Z","lastTransitionTime":"2026-02-18T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.212115 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.212162 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.212175 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:02:33.881437383 +0000 UTC Feb 18 14:33:07 crc kubenswrapper[4957]: E0218 14:33:07.212275 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:07 crc kubenswrapper[4957]: E0218 14:33:07.212457 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.245595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.245636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.245647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.245661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.245671 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:07Z","lastTransitionTime":"2026-02-18T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.348951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.349000 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.349010 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.349030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:07 crc kubenswrapper[4957]: I0218 14:33:07.349043 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:07Z","lastTransitionTime":"2026-02-18T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.070214 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.070299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.070322 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.070354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.070376 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:08Z","lastTransitionTime":"2026-02-18T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.212184 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.212282 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:33:08 crc kubenswrapper[4957]: I0218 14:33:08.212383 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:51:51.091347758 +0000 UTC
Feb 18 14:33:08 crc kubenswrapper[4957]: E0218 14:33:08.212441 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:33:08 crc kubenswrapper[4957]: E0218 14:33:08.212536 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.105597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.105685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.105711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.105736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.105754 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:09Z","lastTransitionTime":"2026-02-18T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.211909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:33:09 crc kubenswrapper[4957]: E0218 14:33:09.212011 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.212209 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:33:09 crc kubenswrapper[4957]: E0218 14:33:09.212542 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:33:09 crc kubenswrapper[4957]: I0218 14:33:09.212582 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:01:11.618615433 +0000 UTC
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.137851 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.137899 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.137913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.137930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.137948 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:10Z","lastTransitionTime":"2026-02-18T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.212151 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:33:10 crc kubenswrapper[4957]: E0218 14:33:10.212306 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.212351 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:33:10 crc kubenswrapper[4957]: E0218 14:33:10.212549 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:33:10 crc kubenswrapper[4957]: I0218 14:33:10.212650 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:54:41.352089327 +0000 UTC
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.067593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.067633 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.067643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.067660 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.067671 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:11Z","lastTransitionTime":"2026-02-18T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.212177 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.212253 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:33:11 crc kubenswrapper[4957]: E0218 14:33:11.212365 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
Feb 18 14:33:11 crc kubenswrapper[4957]: E0218 14:33:11.212527 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:33:11 crc kubenswrapper[4957]: I0218 14:33:11.213295 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:52:35.224980888 +0000 UTC
Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.076729 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.076796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.076820 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.076850 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.076872 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.092541 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:12Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.097570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.097707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.097742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.097828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.097854 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.118791 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:12Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.122889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.122920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
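The condition object in the setters.go entries above is plain JSON. A minimal Go sketch, illustrative only and not kubelet's own code, that decodes one of these payloads; the struct mirrors just the fields visible in the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition mirrors the fields shown in the "Node became not ready" entries.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition payload copied verbatim from a setters.go:603 entry above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s reason=%s\n%s\n", c.Type, c.Status, c.Reason, c.Message)
    }

Running it prints Ready=False with reason KubeletNotReady, i.e. exactly the state the kubelet keeps re-recording above.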
event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.122929 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.122944 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.122954 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.137024 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:12Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.141963 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.141992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
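The status update the kubelet keeps retrying is a strategic merge patch: the $setElementOrder/conditions directive pins the final ordering of the merged conditions list while the conditions array carries the updated entries. A hedged Go sketch of just that shape, with hand-built maps for illustration rather than kubelet's actual patch assembly:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // The order directive lists the condition types in their final order,
        // matching the $setElementOrder/conditions block in the errors above.
        order := []map[string]string{
            {"type": "MemoryPressure"}, {"type": "DiskPressure"},
            {"type": "PIDPressure"}, {"type": "Ready"},
        }
        // One updated condition, matching the Ready entry in the log.
        ready := map[string]string{
            "type": "Ready", "status": "False", "reason": "KubeletNotReady",
        }
        patch := map[string]any{
            "status": map[string]any{
                "$setElementOrder/conditions": order,
                "conditions":                  []any{ready},
            },
        }
        out, err := json.Marshal(patch)
        if err != nil {
            panic(err)
        }
        // Prints the same structure as the payload in the retry errors,
        // minus the allocatable/capacity/images/nodeInfo fields.
        fmt.Println(string(out))
    }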
event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.142000 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.142016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.142026 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.156296 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:12Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.161278 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.161315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
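The NetworkPluginNotReady message names a concrete check: whether /etc/kubernetes/cni/net.d contains any CNI configuration at all. A small diagnostic sketch, meant to be run on the node itself; the extension list below is the conventional CNI set and an assumption here, not taken from this log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the "no CNI configuration file" errors above.
        dir := "/etc/kubernetes/cni/net.d"

        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // conventional CNI config extensions
                found++
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration file found, matching the kubelet error")
        }
    }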
event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.161326 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.161343 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.161355 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.172851 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:33:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"73cbc702-e999-4b05-a826-bb1b15d4d73b\\\",\\\"systemUUID\\\":\\\"9fb0acd4-1ed1-4909-a63a-3f4dd7b07055\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:33:12Z is after 2025-08-24T17:21:41Z" Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.173006 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.174232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.174260 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.174270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.174288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.174299 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.212177 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.212186 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.212451 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.212531 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.213680 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:46:17.042290695 +0000 UTC Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.263884 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.277354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.277400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.277410 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.277448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.277498 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.379958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.380040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.380057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.380079 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.380094 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.483187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.483248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.483266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.483290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.483308 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.587507 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.587564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.587584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.587611 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.587690 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.691495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.691569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.691594 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.691621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.691643 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.793681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.793736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.793748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.793764 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.793777 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.798468 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.798634 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:33:12 crc kubenswrapper[4957]: E0218 14:33:12.798702 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs podName:58c40982-35c8-4670-ad21-513a7a5a458e nodeName:}" failed. No retries permitted until 2026-02-18 14:34:16.798686047 +0000 UTC m=+163.319550791 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs") pod "network-metrics-daemon-jkmlc" (UID: "58c40982-35c8-4670-ad21-513a7a5a458e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.896523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.896587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.896601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.896620 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.896652 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.999572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.999640 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.999662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:12 crc kubenswrapper[4957]: I0218 14:33:12.999687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:12.999703 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:12Z","lastTransitionTime":"2026-02-18T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.106002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.106100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.106146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.106185 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.106228 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.209894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.209959 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.209976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.210002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.210021 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.212137 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.212219 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:13 crc kubenswrapper[4957]: E0218 14:33:13.212341 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:13 crc kubenswrapper[4957]: E0218 14:33:13.212794 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.212963 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" Feb 18 14:33:13 crc kubenswrapper[4957]: E0218 14:33:13.213102 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.214482 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:25:50.828695048 +0000 UTC Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.312749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.312812 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.312829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.312854 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.312872 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.415833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.415878 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.415891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.415908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.415921 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.518960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.519029 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.519046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.519074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.519091 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.621697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.621743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.621755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.621772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.621783 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.724539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.724582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.724590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.724606 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.724617 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.826928 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.826974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.826987 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.827002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.827018 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.930894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.930951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.930967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.930996 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:13 crc kubenswrapper[4957]: I0218 14:33:13.931015 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:13Z","lastTransitionTime":"2026-02-18T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.034280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.034333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.034346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.034361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.034371 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.137068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.137120 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.137131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.137148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.137160 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.212392 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:14 crc kubenswrapper[4957]: E0218 14:33:14.212546 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.212390 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:14 crc kubenswrapper[4957]: E0218 14:33:14.212858 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.215092 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:58:02.077964376 +0000 UTC Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.243942 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.243973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.243984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.243997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.244005 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.254793 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.254771538 podStartE2EDuration="1m17.254771538s" podCreationTimestamp="2026-02-18 14:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.238963466 +0000 UTC m=+100.759828210" watchObservedRunningTime="2026-02-18 14:33:14.254771538 +0000 UTC m=+100.775636282"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.279146 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podStartSLOduration=80.279129315 podStartE2EDuration="1m20.279129315s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.278758044 +0000 UTC m=+100.799622798" watchObservedRunningTime="2026-02-18 14:33:14.279129315 +0000 UTC m=+100.799994059"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.279309 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sk96m" podStartSLOduration=80.279304401 podStartE2EDuration="1m20.279304401s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.266989363 +0000 UTC m=+100.787854117" watchObservedRunningTime="2026-02-18 14:33:14.279304401 +0000 UTC m=+100.800169145"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.302865 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-s7f5j" podStartSLOduration=80.302844373 podStartE2EDuration="1m20.302844373s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.302744401 +0000 UTC m=+100.823609165" watchObservedRunningTime="2026-02-18 14:33:14.302844373 +0000 UTC m=+100.823709117"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.328992 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-c5hxm" podStartSLOduration=80.328969133 podStartE2EDuration="1m20.328969133s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.318598314 +0000 UTC m=+100.839463068" watchObservedRunningTime="2026-02-18 14:33:14.328969133 +0000 UTC m=+100.849833887"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.338121 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.338093956 podStartE2EDuration="18.338093956s" podCreationTimestamp="2026-02-18 14:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.337272101 +0000 UTC m=+100.858136855" watchObservedRunningTime="2026-02-18 14:33:14.338093956 +0000 UTC m=+100.858958700"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.346643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.346686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.346700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.346718 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.346732 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.361868 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.361836255 podStartE2EDuration="2.361836255s" podCreationTimestamp="2026-02-18 14:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.36100733 +0000 UTC m=+100.881872084" watchObservedRunningTime="2026-02-18 14:33:14.361836255 +0000 UTC m=+100.882700999"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.426902 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wn2pd" podStartSLOduration=80.426880297 podStartE2EDuration="1m20.426880297s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.426081613 +0000 UTC m=+100.946946367" watchObservedRunningTime="2026-02-18 14:33:14.426880297 +0000 UTC m=+100.947745041"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.448784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.448838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.448850 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.448870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.448881 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.465439 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.465401827 podStartE2EDuration="1m21.465401827s" podCreationTimestamp="2026-02-18 14:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.451007727 +0000 UTC m=+100.971872481" watchObservedRunningTime="2026-02-18 14:33:14.465401827 +0000 UTC m=+100.986266571" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.465676 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.465670985 podStartE2EDuration="44.465670985s" podCreationTimestamp="2026-02-18 14:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.46519387 +0000 UTC m=+100.986058624" watchObservedRunningTime="2026-02-18 14:33:14.465670985 +0000 UTC m=+100.986535729" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.551748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.552078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.552183 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.552277 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.552369 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.655320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.655642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.655730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.655829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.655937 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.758828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.758875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.758886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.758904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.758917 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.861050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.861135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.861148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.861163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.861173 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.963844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.963963 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.963991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.964023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:14 crc kubenswrapper[4957]: I0218 14:33:14.964045 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:14Z","lastTransitionTime":"2026-02-18T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.066672 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.066734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.066747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.066764 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.066776 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.169493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.169544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.169562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.169583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.169596 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.212507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.212543 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:15 crc kubenswrapper[4957]: E0218 14:33:15.212668 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:15 crc kubenswrapper[4957]: E0218 14:33:15.212848 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.215542 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:17:45.43984829 +0000 UTC Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.272973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.273040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.273057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.273103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.273123 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.375885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.375939 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.375956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.375978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.375995 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.479109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.479151 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.479167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.479186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.479200 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.581671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.581730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.581745 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.581765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.581782 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.684788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.684896 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.684927 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.684965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.685003 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.788636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.788713 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.788731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.788754 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.788771 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.891549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.891609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.891633 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.891654 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.891668 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.994305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.994341 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.994366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.994383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:15 crc kubenswrapper[4957]: I0218 14:33:15.994395 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:15Z","lastTransitionTime":"2026-02-18T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.098853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.098916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.098937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.098966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.098987 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.202046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.202123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.202148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.202190 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.202214 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.212363 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.212368 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:16 crc kubenswrapper[4957]: E0218 14:33:16.212556 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:16 crc kubenswrapper[4957]: E0218 14:33:16.212769 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.215960 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:01:23.57303469 +0000 UTC Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.305393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.305481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.305499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.305525 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.305543 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.408521 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.408647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.408667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.408690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.408708 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.512516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.512580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.512605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.512635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.512663 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.617272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.617348 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.617379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.617410 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.617501 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.720555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.720610 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.720626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.720650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.720668 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.823806 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.823876 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.823896 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.823921 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.823938 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.926846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.926915 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.926932 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.926958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:16 crc kubenswrapper[4957]: I0218 14:33:16.926975 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:16Z","lastTransitionTime":"2026-02-18T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.030268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.030320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.030330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.030351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.030366 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.133281 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.133338 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.133355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.133379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.133395 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.212300 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.212367 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:17 crc kubenswrapper[4957]: E0218 14:33:17.212728 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:17 crc kubenswrapper[4957]: E0218 14:33:17.212889 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.216350 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:22:52.209814615 +0000 UTC Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.235844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.235924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.235954 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.235988 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.236010 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.338462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.338517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.338531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.339077 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.339093 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.442071 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.442139 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.442160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.442188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.442209 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.545805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.545878 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.545902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.545934 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.545953 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.648388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.648440 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.648448 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.648461 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.648469 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.750977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.751013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.751021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.751061 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.751072 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.853679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.853727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.853739 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.853758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.853783 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.956731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.956787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.956802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.956823 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:17 crc kubenswrapper[4957]: I0218 14:33:17.956839 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:17Z","lastTransitionTime":"2026-02-18T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.060204 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.060247 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.060258 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.060281 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.060294 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.162552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.162589 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.162598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.162636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.162655 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.212458 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.212456 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:18 crc kubenswrapper[4957]: E0218 14:33:18.212653 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:18 crc kubenswrapper[4957]: E0218 14:33:18.212847 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.217178 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:48:30.540022737 +0000 UTC Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.264920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.264990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.265002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.265021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.265035 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.367480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.367528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.367540 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.367558 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.367572 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.469741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.469770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.469797 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.469810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.469820 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.572689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.572751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.572768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.572795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.572811 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.675121 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.675177 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.675196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.675219 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.675238 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.777490 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.777552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.777567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.777583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.777594 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.880613 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.880678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.880702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.880732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.880757 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.983695 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.983746 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.983756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.983773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:18 crc kubenswrapper[4957]: I0218 14:33:18.983784 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:18Z","lastTransitionTime":"2026-02-18T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.085956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.086028 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.086052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.086082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.086105 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.189055 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.189124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.189141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.189166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.189182 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.212853 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.213228 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:19 crc kubenswrapper[4957]: E0218 14:33:19.214048 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:19 crc kubenswrapper[4957]: E0218 14:33:19.214376 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.218243 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:27:27.463568989 +0000 UTC Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.292647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.293035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.293049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.293065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.293076 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.396096 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.396154 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.396173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.396196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.396215 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.498731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.498796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.498813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.498838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.498855 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.601647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.601697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.601721 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.601742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.601758 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.704195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.704253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.704270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.704291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.704308 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.807271 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.807331 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.807341 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.807364 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.807381 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.910529 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.910625 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.910663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.910696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:19 crc kubenswrapper[4957]: I0218 14:33:19.910722 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:19Z","lastTransitionTime":"2026-02-18T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.013707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.013770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.013780 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.013803 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.013816 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.116791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.117197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.117235 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.117265 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.117286 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.212909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.212933 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:20 crc kubenswrapper[4957]: E0218 14:33:20.213133 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:20 crc kubenswrapper[4957]: E0218 14:33:20.213215 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.218817 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:27:22.625798301 +0000 UTC Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.221877 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.221931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.221949 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.221970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.221988 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.324529 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.324587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.324607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.324631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.324650 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.427559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.427616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.427630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.427696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.427714 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.530707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.530754 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.530766 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.530784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.530797 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.633663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.633785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.633803 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.633829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.633850 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.737149 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.737217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.737234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.737260 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.737279 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.839990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.840042 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.840058 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.840085 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.840103 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.943595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.943700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.943729 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.943810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.943830 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:20Z","lastTransitionTime":"2026-02-18T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
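The same five node-status events repeat roughly every 100 ms while the node stays NotReady, so excerpts like this compress well. A hypothetical helper (not part of any kubelet tooling) that collapses consecutive entries differing only in their timestamp prefix:

// Hypothetical log-reading aid: suppress consecutive kubenswrapper lines
// whose bodies are identical once the timestamp prefix is stripped.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// tsPrefix matches e.g. "Feb 18 14:33:20 crc kubenswrapper[4957]: I0218 14:33:20.013707 4957 ".
var tsPrefix = regexp.MustCompile(`^\w{3} \d+ [\d:]+ \S+ kubenswrapper\[\d+\]: [IEW]\d{4} [\d:.]+\s+\d+\s+`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // kubelet lines can be long
	var prev string
	skipped := 0
	for sc.Scan() {
		body := tsPrefix.ReplaceAllString(sc.Text(), "")
		if body == prev {
			skipped++ // same event, new timestamp: elide
			continue
		}
		if skipped > 0 {
			fmt.Printf("  [... %d duplicate lines elided ...]\n", skipped)
			skipped = 0
		}
		fmt.Println(sc.Text())
		prev = body
	}
}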
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.047053 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.047111 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.047127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.047153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.047175 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:21Z","lastTransitionTime":"2026-02-18T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.212210 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc"
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.212230 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:33:21 crc kubenswrapper[4957]: E0218 14:33:21.212512 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e"
Feb 18 14:33:21 crc kubenswrapper[4957]: E0218 14:33:21.212618 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:33:21 crc kubenswrapper[4957]: I0218 14:33:21.219478 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:36:28.722402534 +0000 UTC
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.081712 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.081976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.082145 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.082256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.082343 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:33:22Z","lastTransitionTime":"2026-02-18T14:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.214573 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.214652 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:33:22 crc kubenswrapper[4957]: E0218 14:33:22.214737 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 14:33:22 crc kubenswrapper[4957]: E0218 14:33:22.214845 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.220450 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:36:08.03291495 +0000 UTC
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.420029 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-52gh7" podStartSLOduration=88.4200091 podStartE2EDuration="1m28.4200091s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:14.537965123 +0000 UTC m=+101.058829867" watchObservedRunningTime="2026-02-18 14:33:22.4200091 +0000 UTC m=+108.940873854"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.421190 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7"]
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.422475 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.426008 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.427525 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.429484 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.429934 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.511983 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/970b8f89-4865-4cf1-84de-fc48f9d4464f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.512039 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/970b8f89-4865-4cf1-84de-fc48f9d4464f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.512062 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/970b8f89-4865-4cf1-84de-fc48f9d4464f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7"
Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.512097 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/970b8f89-4865-4cf1-84de-fc48f9d4464f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7"
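podStartSLOduration=88.4200091 in the startup-latency entry above is watchObservedRunningTime minus podCreationTimestamp. Reproducing that arithmetic from the two timestamps in the line:

// Sketch: recompute the pod startup duration from the timestamps logged
// by pod_startup_latency_tracker.go.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-18 14:31:54 +0000 UTC")
	observed, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2026-02-18 14:33:22.4200091 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 1m28.4200091s, matching podStartE2EDuration
}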
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.512124 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/970b8f89-4865-4cf1-84de-fc48f9d4464f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.613709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/970b8f89-4865-4cf1-84de-fc48f9d4464f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.613795 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/970b8f89-4865-4cf1-84de-fc48f9d4464f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.613846 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/970b8f89-4865-4cf1-84de-fc48f9d4464f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.613871 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/970b8f89-4865-4cf1-84de-fc48f9d4464f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.613899 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/970b8f89-4865-4cf1-84de-fc48f9d4464f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.613960 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/970b8f89-4865-4cf1-84de-fc48f9d4464f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.614027 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/970b8f89-4865-4cf1-84de-fc48f9d4464f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 
14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.615109 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/970b8f89-4865-4cf1-84de-fc48f9d4464f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.621805 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/970b8f89-4865-4cf1-84de-fc48f9d4464f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.635994 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/970b8f89-4865-4cf1-84de-fc48f9d4464f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-86lt7\" (UID: \"970b8f89-4865-4cf1-84de-fc48f9d4464f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.744639 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" Feb 18 14:33:22 crc kubenswrapper[4957]: I0218 14:33:22.826278 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" event={"ID":"970b8f89-4865-4cf1-84de-fc48f9d4464f","Type":"ContainerStarted","Data":"71ab1419419fbb2dc08a3d10eea78cac4d5b9342509ae26711977cd9560c47c1"} Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.212873 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.212925 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:23 crc kubenswrapper[4957]: E0218 14:33:23.213027 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:23 crc kubenswrapper[4957]: E0218 14:33:23.213317 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
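The volume sequence above is two-phase: every volume first passes VerifyControllerAttachedVolume, then is mounted via MountVolume.SetUp. A toy model of that ordering with the five volumes from the log (not the kubelet's actual reconciler API):

// Toy model of the two-phase volume flow visible above: verify the
// attach state for all volumes, then run SetUp for each.
package main

import "fmt"

type volume struct{ name, kind string }

func verifyAttached(v volume) error {
	// host-path, configmap, secret and projected volumes need no real attach
	fmt.Printf("VerifyControllerAttachedVolume started for %q (%s)\n", v.name, v.kind)
	return nil
}

func mountVolume(v volume) {
	fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
}

func main() {
	vols := []volume{
		{"serving-cert", "secret"},
		{"etc-cvo-updatepayloads", "host-path"},
		{"service-ca", "configmap"},
		{"etc-ssl-certs", "host-path"},
		{"kube-api-access", "projected"},
	}
	for _, v := range vols { // phase 1: verify
		if err := verifyAttached(v); err != nil {
			return
		}
	}
	for _, v := range vols { // phase 2: mount
		mountVolume(v)
	}
}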
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.221492 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:08:04.728332332 +0000 UTC Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.221542 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.231699 4957 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.834335 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" event={"ID":"970b8f89-4865-4cf1-84de-fc48f9d4464f","Type":"ContainerStarted","Data":"ea8e4b9aad11bcba09d903cadba6a3b3c8e90c8cc4264c4cc3ed7fd01724fc28"} Feb 18 14:33:23 crc kubenswrapper[4957]: I0218 14:33:23.851770 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-86lt7" podStartSLOduration=89.851742715 podStartE2EDuration="1m29.851742715s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:23.851744035 +0000 UTC m=+110.372608839" watchObservedRunningTime="2026-02-18 14:33:23.851742715 +0000 UTC m=+110.372607489" Feb 18 14:33:24 crc kubenswrapper[4957]: I0218 14:33:24.212507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:24 crc kubenswrapper[4957]: I0218 14:33:24.212564 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:24 crc kubenswrapper[4957]: E0218 14:33:24.215079 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:24 crc kubenswrapper[4957]: E0218 14:33:24.215510 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:24 crc kubenswrapper[4957]: I0218 14:33:24.216007 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" Feb 18 14:33:24 crc kubenswrapper[4957]: E0218 14:33:24.216255 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t7lp9_openshift-ovn-kubernetes(c1ab5e7d-28c9-416b-9e12-1209987d8a2c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" Feb 18 14:33:25 crc kubenswrapper[4957]: I0218 14:33:25.212126 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:25 crc kubenswrapper[4957]: I0218 14:33:25.212145 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:25 crc kubenswrapper[4957]: E0218 14:33:25.212303 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:25 crc kubenswrapper[4957]: E0218 14:33:25.212514 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:26 crc kubenswrapper[4957]: I0218 14:33:26.213695 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:26 crc kubenswrapper[4957]: I0218 14:33:26.213695 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:26 crc kubenswrapper[4957]: E0218 14:33:26.214287 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:26 crc kubenswrapper[4957]: E0218 14:33:26.214352 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:27 crc kubenswrapper[4957]: I0218 14:33:27.212001 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:27 crc kubenswrapper[4957]: I0218 14:33:27.212058 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:27 crc kubenswrapper[4957]: E0218 14:33:27.212217 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:27 crc kubenswrapper[4957]: E0218 14:33:27.212480 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.211982 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:28 crc kubenswrapper[4957]: E0218 14:33:28.212189 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.212561 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:28 crc kubenswrapper[4957]: E0218 14:33:28.212683 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.854923 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/1.log" Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.856243 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/0.log" Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.856376 4957 generic.go:334] "Generic (PLEG): container finished" podID="e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb" containerID="3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e" exitCode=1 Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.856470 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerDied","Data":"3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e"} Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.856550 4957 scope.go:117] "RemoveContainer" containerID="644a3c29970ac9c5262a3a66390a566947b89d6020eaf5de12afb1afeaaff673" Feb 18 14:33:28 crc kubenswrapper[4957]: I0218 14:33:28.859526 4957 scope.go:117] "RemoveContainer" containerID="3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e" Feb 18 14:33:28 crc kubenswrapper[4957]: E0218 14:33:28.860160 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sk96m_openshift-multus(e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb)\"" pod="openshift-multus/multus-sk96m" podUID="e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb" Feb 18 14:33:29 crc kubenswrapper[4957]: I0218 14:33:29.212657 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:29 crc kubenswrapper[4957]: I0218 14:33:29.212670 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:29 crc kubenswrapper[4957]: E0218 14:33:29.212871 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:29 crc kubenswrapper[4957]: E0218 14:33:29.213032 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:29 crc kubenswrapper[4957]: I0218 14:33:29.862343 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/1.log" Feb 18 14:33:30 crc kubenswrapper[4957]: I0218 14:33:30.212608 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:30 crc kubenswrapper[4957]: I0218 14:33:30.212617 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:30 crc kubenswrapper[4957]: E0218 14:33:30.212761 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:30 crc kubenswrapper[4957]: E0218 14:33:30.212846 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:31 crc kubenswrapper[4957]: I0218 14:33:31.211958 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:31 crc kubenswrapper[4957]: I0218 14:33:31.211972 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:31 crc kubenswrapper[4957]: E0218 14:33:31.212162 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:31 crc kubenswrapper[4957]: E0218 14:33:31.212246 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:32 crc kubenswrapper[4957]: I0218 14:33:32.212129 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:32 crc kubenswrapper[4957]: I0218 14:33:32.212129 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:32 crc kubenswrapper[4957]: E0218 14:33:32.212294 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:32 crc kubenswrapper[4957]: E0218 14:33:32.212347 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:33 crc kubenswrapper[4957]: I0218 14:33:33.212324 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:33 crc kubenswrapper[4957]: E0218 14:33:33.212784 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:33 crc kubenswrapper[4957]: I0218 14:33:33.213113 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:33 crc kubenswrapper[4957]: E0218 14:33:33.213446 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:34 crc kubenswrapper[4957]: E0218 14:33:34.151307 4957 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 14:33:34 crc kubenswrapper[4957]: I0218 14:33:34.212965 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:34 crc kubenswrapper[4957]: I0218 14:33:34.213023 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:34 crc kubenswrapper[4957]: E0218 14:33:34.213129 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:34 crc kubenswrapper[4957]: E0218 14:33:34.213221 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:34 crc kubenswrapper[4957]: E0218 14:33:34.301822 4957 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:33:35 crc kubenswrapper[4957]: I0218 14:33:35.212656 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:35 crc kubenswrapper[4957]: E0218 14:33:35.212827 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:35 crc kubenswrapper[4957]: I0218 14:33:35.212656 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:35 crc kubenswrapper[4957]: E0218 14:33:35.213028 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:36 crc kubenswrapper[4957]: I0218 14:33:36.212465 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:36 crc kubenswrapper[4957]: E0218 14:33:36.212604 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:36 crc kubenswrapper[4957]: I0218 14:33:36.212481 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:36 crc kubenswrapper[4957]: E0218 14:33:36.212732 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.212688 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.212689 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:37 crc kubenswrapper[4957]: E0218 14:33:37.213017 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:37 crc kubenswrapper[4957]: E0218 14:33:37.213150 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.213966 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192" Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.898422 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/3.log" Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.900884 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerStarted","Data":"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404"} Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.901631 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:33:37 crc kubenswrapper[4957]: I0218 14:33:37.941251 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podStartSLOduration=103.941227902 podStartE2EDuration="1m43.941227902s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:37.940512711 +0000 UTC m=+124.461377455" watchObservedRunningTime="2026-02-18 14:33:37.941227902 +0000 UTC m=+124.462092656" Feb 18 14:33:38 crc kubenswrapper[4957]: I0218 14:33:38.161088 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jkmlc"] Feb 18 14:33:38 crc kubenswrapper[4957]: I0218 14:33:38.161221 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:38 crc kubenswrapper[4957]: E0218 14:33:38.161380 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:38 crc kubenswrapper[4957]: I0218 14:33:38.212681 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:38 crc kubenswrapper[4957]: I0218 14:33:38.212730 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:38 crc kubenswrapper[4957]: E0218 14:33:38.212825 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:38 crc kubenswrapper[4957]: E0218 14:33:38.213091 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:39 crc kubenswrapper[4957]: I0218 14:33:39.212170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:39 crc kubenswrapper[4957]: E0218 14:33:39.212934 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:39 crc kubenswrapper[4957]: E0218 14:33:39.303815 4957 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:33:40 crc kubenswrapper[4957]: I0218 14:33:40.213744 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:40 crc kubenswrapper[4957]: E0218 14:33:40.213966 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:40 crc kubenswrapper[4957]: I0218 14:33:40.214258 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:40 crc kubenswrapper[4957]: E0218 14:33:40.214349 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:40 crc kubenswrapper[4957]: I0218 14:33:40.214476 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:40 crc kubenswrapper[4957]: E0218 14:33:40.214710 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:41 crc kubenswrapper[4957]: I0218 14:33:41.212032 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:41 crc kubenswrapper[4957]: E0218 14:33:41.212197 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:42 crc kubenswrapper[4957]: I0218 14:33:42.212012 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:42 crc kubenswrapper[4957]: I0218 14:33:42.212099 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:42 crc kubenswrapper[4957]: I0218 14:33:42.212169 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:42 crc kubenswrapper[4957]: E0218 14:33:42.212301 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:42 crc kubenswrapper[4957]: E0218 14:33:42.212514 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:42 crc kubenswrapper[4957]: E0218 14:33:42.212914 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:42 crc kubenswrapper[4957]: I0218 14:33:42.213182 4957 scope.go:117] "RemoveContainer" containerID="3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e" Feb 18 14:33:42 crc kubenswrapper[4957]: I0218 14:33:42.916993 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/1.log" Feb 18 14:33:42 crc kubenswrapper[4957]: I0218 14:33:42.917391 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerStarted","Data":"5404d3757559a526399e1c8ad0cad0fc231a9f59ad1d9be14407c358b01a6bb2"} Feb 18 14:33:43 crc kubenswrapper[4957]: I0218 14:33:43.212941 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:43 crc kubenswrapper[4957]: E0218 14:33:43.213088 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:33:44 crc kubenswrapper[4957]: I0218 14:33:44.212097 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:44 crc kubenswrapper[4957]: I0218 14:33:44.212245 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:44 crc kubenswrapper[4957]: I0218 14:33:44.212332 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:44 crc kubenswrapper[4957]: E0218 14:33:44.214557 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:33:44 crc kubenswrapper[4957]: E0218 14:33:44.214692 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jkmlc" podUID="58c40982-35c8-4670-ad21-513a7a5a458e" Feb 18 14:33:44 crc kubenswrapper[4957]: E0218 14:33:44.214768 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:33:45 crc kubenswrapper[4957]: I0218 14:33:45.212537 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:33:45 crc kubenswrapper[4957]: I0218 14:33:45.214843 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 14:33:45 crc kubenswrapper[4957]: I0218 14:33:45.217292 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.212373 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.212518 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.212605 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.214666 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.214758 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.215030 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 14:33:46 crc kubenswrapper[4957]: I0218 14:33:46.216405 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.450508 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.785917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.839676 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.840182 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.845296 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.845479 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.845705 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s7l47"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.845830 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.846157 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.846290 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.849492 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lp5cj"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.849957 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.850355 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.850790 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.850858 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tmknf"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.851612 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.853085 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.858662 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d9hcp"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.859306 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.859321 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.860402 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.860488 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-594sd"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.860880 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.865133 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d4mbw"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.865690 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.865962 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.866754 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.872857 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873088 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873122 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873222 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873397 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873487 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873598 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873814 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873859 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.873921 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.874029 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.876543 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc 
kubenswrapper[4957]: I0218 14:33:52.874039 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.878104 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.878835 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.879322 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.879343 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.881162 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.881405 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.881606 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.881864 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.882060 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.882777 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.891049 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892048 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892133 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892376 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892499 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892637 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892737 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.892806 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:33:52 
crc kubenswrapper[4957]: I0218 14:33:52.892978 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.893242 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.893641 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.894126 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.894814 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.894945 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fxh8s"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.895391 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.896615 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hvb66"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.896953 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.899367 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.899599 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.899672 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.899771 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.900269 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.900364 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.900469 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.900550 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.900750 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.901025 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 
14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.901158 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.901303 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.901465 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.901933 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.903620 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.903734 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.905669 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.905880 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.906002 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.906080 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.906234 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.906249 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.906582 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.907368 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.907726 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.907866 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.913505 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.914075 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.914233 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.914621 4957 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.914782 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.908142 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.911169 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.918967 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.919407 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.919537 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.919713 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.919828 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.919988 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.921263 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fpnw9"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.921739 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.921912 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.922052 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.923141 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.923729 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.946472 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.947302 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.947817 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.948703 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.949136 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.950216 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.950470 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.951554 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952471 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-serving-cert\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-client-ca\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952530 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-config\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952592 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22fac79a-ec42-4f23-9f6b-79b68cde0489-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 
14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952618 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-config\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rll8m\" (UniqueName: \"kubernetes.io/projected/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-kube-api-access-rll8m\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.952669 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953235 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4mc\" (UniqueName: \"kubernetes.io/projected/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-kube-api-access-rh4mc\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953296 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hfcww"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p46m8\" (UniqueName: \"kubernetes.io/projected/609d339c-7830-4e17-b847-da17573e1ed0-kube-api-access-p46m8\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953849 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ebbd0f0-af37-460a-88f5-ff0e855f652c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953898 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953930 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2c035f3-f03e-4263-a7a5-e821b1fc5488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4vvs\" (UID: \"f2c035f3-f03e-4263-a7a5-e821b1fc5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.953992 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-trusted-ca\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954032 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1889400f-2fff-4c67-b401-966e820d5a26-audit-dir\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954094 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954110 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-encryption-config\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954167 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkgnl\" (UniqueName: \"kubernetes.io/projected/f2c035f3-f03e-4263-a7a5-e821b1fc5488-kube-api-access-mkgnl\") pod \"cluster-samples-operator-665b6dd947-g4vvs\" (UID: \"f2c035f3-f03e-4263-a7a5-e821b1fc5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954210 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ebbd0f0-af37-460a-88f5-ff0e855f652c-config\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954217 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ebbd0f0-af37-460a-88f5-ff0e855f652c-images\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954294 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954332 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954377 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-serving-cert\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954368 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954456 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30c49091-ad03-4fcf-a0d4-3955a1ddaf97-metrics-tls\") pod \"dns-operator-744455d44c-s7l47\" (UID: \"30c49091-ad03-4fcf-a0d4-3955a1ddaf97\") " pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-config\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954518 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22fac79a-ec42-4f23-9f6b-79b68cde0489-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954549 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc 
kubenswrapper[4957]: I0218 14:33:52.954569 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954670 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954778 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-client-ca\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954823 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-dir\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954857 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954870 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954894 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce37342-c95b-4be4-b48c-91553e81206a-serving-cert\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.954921 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955003 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-etcd-client\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 
crc kubenswrapper[4957]: I0218 14:33:52.955052 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955085 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnmh\" (UniqueName: \"kubernetes.io/projected/22fac79a-ec42-4f23-9f6b-79b68cde0489-kube-api-access-hnnmh\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955111 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-audit-policies\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955135 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-serving-cert\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955220 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46tj\" (UniqueName: \"kubernetes.io/projected/9ce37342-c95b-4be4-b48c-91553e81206a-kube-api-access-f46tj\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955300 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955325 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955345 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955368 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxvk8\" (UniqueName: \"kubernetes.io/projected/7ebbd0f0-af37-460a-88f5-ff0e855f652c-kube-api-access-lxvk8\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcbrj\" (UniqueName: \"kubernetes.io/projected/30c49091-ad03-4fcf-a0d4-3955a1ddaf97-kube-api-access-gcbrj\") pod \"dns-operator-744455d44c-s7l47\" (UID: \"30c49091-ad03-4fcf-a0d4-3955a1ddaf97\") " pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955405 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955445 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609d339c-7830-4e17-b847-da17573e1ed0-serving-cert\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx625\" (UniqueName: \"kubernetes.io/projected/b8d0adaf-a3c6-4121-970c-1f6205db177e-kube-api-access-mx625\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955484 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr52b\" (UniqueName: \"kubernetes.io/projected/1889400f-2fff-4c67-b401-966e820d5a26-kube-api-access-vr52b\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955504 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955523 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-594sd\" (UID: 
\"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955543 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-policies\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.955722 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.956063 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.956258 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.956364 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.958736 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.959635 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.959910 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.974787 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.970517 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.975125 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.974289 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.975644 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.976523 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.975362 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.977728 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nddjw"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.977964 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.978191 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.978879 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.979229 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.979559 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.980002 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.980152 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.980473 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.980950 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6f6dt"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.981979 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.983481 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.984570 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.985160 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jg752"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.985539 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.985595 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.986143 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.986565 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.987584 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.988667 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.988801 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.989063 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.989833 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.989964 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.997270 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.998382 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfz7k"] Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.998843 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:52 crc kubenswrapper[4957]: I0218 14:33:52.999687 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:52.999994 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.001045 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.001169 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.001752 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.002234 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.002504 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.003201 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.003643 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.004591 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s74tf"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.005045 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.005501 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.006310 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.006625 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s7l47"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.007739 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.008495 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.014350 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wbzr4"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.015090 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.015195 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.016105 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d4mbw"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.017590 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fxh8s"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.020234 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.022519 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.029447 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fpnw9"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.031073 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lp5cj"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.038473 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.043398 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.043588 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.045584 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.046809 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.049963 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-594sd"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.053844 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d9hcp"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.054954 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hvb66"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.056991 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30c49091-ad03-4fcf-a0d4-3955a1ddaf97-metrics-tls\") pod \"dns-operator-744455d44c-s7l47\" (UID: \"30c49091-ad03-4fcf-a0d4-3955a1ddaf97\") " pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057048 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-serving-cert\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057051 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-config\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057147 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22fac79a-ec42-4f23-9f6b-79b68cde0489-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057754 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-client-ca\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.057785 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-dir\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058245 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058270 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce37342-c95b-4be4-b48c-91553e81206a-serving-cert\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058299 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-etcd-client\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058330 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46tj\" (UniqueName: \"kubernetes.io/projected/9ce37342-c95b-4be4-b48c-91553e81206a-kube-api-access-f46tj\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058343 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hnnmh\" (UniqueName: \"kubernetes.io/projected/22fac79a-ec42-4f23-9f6b-79b68cde0489-kube-api-access-hnnmh\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058358 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-audit-policies\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058374 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-serving-cert\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058438 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262c1468-53c3-406c-b186-2912e491ba70-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058456 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058701 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-config\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058772 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-dir\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058059 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22fac79a-ec42-4f23-9f6b-79b68cde0489-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.058840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-client-ca\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.059523 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.059527 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.059578 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.059587 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.060118 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-audit-policies\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061126 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061494 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061544 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nddjw"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061780 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxvk8\" (UniqueName: \"kubernetes.io/projected/7ebbd0f0-af37-460a-88f5-ff0e855f652c-kube-api-access-lxvk8\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061830 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcbrj\" (UniqueName: \"kubernetes.io/projected/30c49091-ad03-4fcf-a0d4-3955a1ddaf97-kube-api-access-gcbrj\") pod \"dns-operator-744455d44c-s7l47\" (UID: \"30c49091-ad03-4fcf-a0d4-3955a1ddaf97\") " pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061855 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061878 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609d339c-7830-4e17-b847-da17573e1ed0-serving-cert\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061903 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx625\" (UniqueName: \"kubernetes.io/projected/b8d0adaf-a3c6-4121-970c-1f6205db177e-kube-api-access-mx625\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061926 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr52b\" (UniqueName: \"kubernetes.io/projected/1889400f-2fff-4c67-b401-966e820d5a26-kube-api-access-vr52b\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061948 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-policies\") pod \"oauth-openshift-558db77b4-594sd\" (UID: 
\"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061967 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061992 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062016 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-serving-cert\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062043 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062070 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-client-ca\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062091 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-config\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062111 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22fac79a-ec42-4f23-9f6b-79b68cde0489-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-config\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc 
kubenswrapper[4957]: I0218 14:33:53.062151 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062172 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4mc\" (UniqueName: \"kubernetes.io/projected/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-kube-api-access-rh4mc\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rll8m\" (UniqueName: \"kubernetes.io/projected/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-kube-api-access-rll8m\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062218 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p46m8\" (UniqueName: \"kubernetes.io/projected/609d339c-7830-4e17-b847-da17573e1ed0-kube-api-access-p46m8\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062248 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ebbd0f0-af37-460a-88f5-ff0e855f652c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2c035f3-f03e-4263-a7a5-e821b1fc5488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4vvs\" (UID: \"f2c035f3-f03e-4263-a7a5-e821b1fc5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062307 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-trusted-ca\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062331 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1889400f-2fff-4c67-b401-966e820d5a26-audit-dir\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062355 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-encryption-config\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062387 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkgnl\" (UniqueName: \"kubernetes.io/projected/f2c035f3-f03e-4263-a7a5-e821b1fc5488-kube-api-access-mkgnl\") pod \"cluster-samples-operator-665b6dd947-g4vvs\" (UID: \"f2c035f3-f03e-4263-a7a5-e821b1fc5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062416 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262c1468-53c3-406c-b186-2912e491ba70-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062464 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ebbd0f0-af37-460a-88f5-ff0e855f652c-config\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062480 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ebbd0f0-af37-460a-88f5-ff0e855f652c-images\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062500 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmg2c\" (UniqueName: \"kubernetes.io/projected/262c1468-53c3-406c-b186-2912e491ba70-kube-api-access-rmg2c\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-etcd-client\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.062881 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.063100 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1889400f-2fff-4c67-b401-966e820d5a26-audit-dir\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.063164 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.063250 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1889400f-2fff-4c67-b401-966e820d5a26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.064404 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-config\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.061903 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.064562 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-client-ca\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.065152 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-policies\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.065408 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.065487 4957 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.065583 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-config\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.065687 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.066709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ebbd0f0-af37-460a-88f5-ff0e855f652c-config\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.066787 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-trusted-ca\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.066921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ebbd0f0-af37-460a-88f5-ff0e855f652c-images\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.066994 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfz7k"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.067501 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.067988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.068323 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ebbd0f0-af37-460a-88f5-ff0e855f652c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.068678 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.069253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-serving-cert\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.069611 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2c035f3-f03e-4263-a7a5-e821b1fc5488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4vvs\" (UID: \"f2c035f3-f03e-4263-a7a5-e821b1fc5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.069727 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.069858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22fac79a-ec42-4f23-9f6b-79b68cde0489-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.070012 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609d339c-7830-4e17-b847-da17573e1ed0-serving-cert\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.070295 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6f6dt"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.070914 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.071098 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.071126 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1889400f-2fff-4c67-b401-966e820d5a26-encryption-config\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: 
\"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.071610 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.072174 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30c49091-ad03-4fcf-a0d4-3955a1ddaf97-metrics-tls\") pod \"dns-operator-744455d44c-s7l47\" (UID: \"30c49091-ad03-4fcf-a0d4-3955a1ddaf97\") " pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.072183 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce37342-c95b-4be4-b48c-91553e81206a-serving-cert\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.072253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-serving-cert\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.072693 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tmknf"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.073545 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nx4lr"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.073936 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-serving-cert\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.074258 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.074883 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wbzr4"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.077219 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.078335 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.079300 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.080578 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.081681 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.082051 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.082818 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.083816 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.084983 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.085959 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.086943 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hfcww"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.087949 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.088908 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s74tf"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.090475 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xrkwr"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.091205 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.094269 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gdjrj"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.095444 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xrkwr"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.095467 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gdjrj"] Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.095555 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.110116 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.132294 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.142345 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.162520 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.162993 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262c1468-53c3-406c-b186-2912e491ba70-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.163028 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmg2c\" (UniqueName: \"kubernetes.io/projected/262c1468-53c3-406c-b186-2912e491ba70-kube-api-access-rmg2c\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.163103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262c1468-53c3-406c-b186-2912e491ba70-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.164577 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262c1468-53c3-406c-b186-2912e491ba70-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.166064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/262c1468-53c3-406c-b186-2912e491ba70-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.202871 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.222961 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.242938 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.262950 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.283044 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.303496 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.322794 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.343279 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.362477 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.383089 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.403541 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.423247 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.443130 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.463307 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.483498 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.502885 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.522711 4957 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.543117 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.563750 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.583822 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.602845 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.622243 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.642983 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.662467 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.683094 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.703074 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.723470 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.743187 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.763489 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.783884 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.802923 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.824078 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.843213 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.863042 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.883285 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 14:33:53 crc 
kubenswrapper[4957]: I0218 14:33:53.903401 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.922796 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.943173 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.969608 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 14:33:53 crc kubenswrapper[4957]: I0218 14:33:53.996502 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.001737 4957 request.go:700] Waited for 1.012679667s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.003128 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.023794 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.043254 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.062443 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.083852 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.103258 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.129140 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.163600 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.182957 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.202986 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.223795 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.243481 4957 reflector.go:368] Caches populated for *v1.ConfigMap from 
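The request.go:700 entries above ("Waited for ... due to client-side throttling, not priority and fairness") are emitted by client-go's client-side token-bucket rate limiter, before a request ever reaches the API server's Priority and Fairness machinery. A minimal Go sketch of the mechanism, assuming a hypothetical kubeconfig path; this is illustrative client-go usage, not the kubelet's actual code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go defaults to roughly QPS=5 with Burst=10. Once the burst
	// bucket drains, each request blocks in the limiter and client-go
	// logs "Waited for ... due to client-side throttling, not priority
	// and fairness" -- the same message seen in the log above.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The throttled request in the log was a ConfigMap LIST with a field
	// selector; this is the equivalent call.
	cms, err := clientset.CoreV1().ConfigMaps("openshift-ingress-operator").List(
		context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=openshift-service-ca.crt"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("configmaps returned:", len(cms.Items))
}

Raising QPS/Burst only trades client-side waits for more load on the API server; during a node start like the one logged here, the one-to-two-second waits are expected and harmless.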
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.267639 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.282948 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.302407 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.323271 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.343546 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.363532 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.382386 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.402995 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.423124 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.443594 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.464305 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.483619 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.503166 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.523338 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.543593 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.563399 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.583780 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.604283 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.624355 4957 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.642989 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.666901 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.682677 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.723529 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46tj\" (UniqueName: \"kubernetes.io/projected/9ce37342-c95b-4be4-b48c-91553e81206a-kube-api-access-f46tj\") pod \"route-controller-manager-6576b87f9c-fkdzp\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.745638 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnmh\" (UniqueName: \"kubernetes.io/projected/22fac79a-ec42-4f23-9f6b-79b68cde0489-kube-api-access-hnnmh\") pod \"openshift-apiserver-operator-796bbdcf4f-jffnj\" (UID: \"22fac79a-ec42-4f23-9f6b-79b68cde0489\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.761116 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxvk8\" (UniqueName: \"kubernetes.io/projected/7ebbd0f0-af37-460a-88f5-ff0e855f652c-kube-api-access-lxvk8\") pod \"machine-api-operator-5694c8668f-d4mbw\" (UID: \"7ebbd0f0-af37-460a-88f5-ff0e855f652c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.780520 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcbrj\" (UniqueName: \"kubernetes.io/projected/30c49091-ad03-4fcf-a0d4-3955a1ddaf97-kube-api-access-gcbrj\") pod \"dns-operator-744455d44c-s7l47\" (UID: \"30c49091-ad03-4fcf-a0d4-3955a1ddaf97\") " pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.802044 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4mc\" (UniqueName: \"kubernetes.io/projected/7b0517cf-fb33-49b0-9f1c-ba39f8edfc37-kube-api-access-rh4mc\") pod \"openshift-config-operator-7777fb866f-tmknf\" (UID: \"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.810930 4957 util.go:30] "No sandbox for pod can be found. 
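The reflector.go:368 "Caches populated for *v1.Secret/*v1.ConfigMap from object-..." entries mark the point where a client-go reflector finishes its initial LIST for one watched object and switches to WATCH; the kubelet keeps one such narrow, per-namespace watch for each Secret and ConfigMap its pods mount. A minimal sketch of the same machinery through a shared informer, assuming a hypothetical kubeconfig path; illustrative only, not the kubelet's code:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch ConfigMaps in a single namespace, much as the kubelet scopes
	// its watches to the namespaces of the pods it runs.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset,
		10*time.Minute, // resync period
		informers.WithNamespace("openshift-etcd-operator"),
	)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// WaitForCacheSync returns once the reflector's initial LIST has
	// landed in the local store -- the moment the kubelet's log marks
	// with "Caches populated for *v1.ConfigMap from ...".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("configmap cache populated")
}

The burst of these messages here is the node start in progress: every pod admitted in the preceding SyncLoop ADD/UPDATE events needs its referenced Secrets and ConfigMaps cached before volume setup can proceed.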
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.810930 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.818005 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rll8m\" (UniqueName: \"kubernetes.io/projected/2d30d957-c658-4ce4-9b04-3f1d64fb67b7-kube-api-access-rll8m\") pod \"console-operator-58897d9998-d9hcp\" (UID: \"2d30d957-c658-4ce4-9b04-3f1d64fb67b7\") " pod="openshift-console-operator/console-operator-58897d9998-d9hcp"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.843868 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p46m8\" (UniqueName: \"kubernetes.io/projected/609d339c-7830-4e17-b847-da17573e1ed0-kube-api-access-p46m8\") pod \"controller-manager-879f6c89f-lp5cj\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.861449 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr52b\" (UniqueName: \"kubernetes.io/projected/1889400f-2fff-4c67-b401-966e820d5a26-kube-api-access-vr52b\") pod \"apiserver-7bbb656c7d-2db8v\" (UID: \"1889400f-2fff-4c67-b401-966e820d5a26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.879403 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx625\" (UniqueName: \"kubernetes.io/projected/b8d0adaf-a3c6-4121-970c-1f6205db177e-kube-api-access-mx625\") pod \"oauth-openshift-558db77b4-594sd\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " pod="openshift-authentication/oauth-openshift-558db77b4-594sd"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.902812 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.904639 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkgnl\" (UniqueName: \"kubernetes.io/projected/f2c035f3-f03e-4263-a7a5-e821b1fc5488-kube-api-access-mkgnl\") pod \"cluster-samples-operator-665b6dd947-g4vvs\" (UID: \"f2c035f3-f03e-4263-a7a5-e821b1fc5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.923490 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.943165 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.962478 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.972059 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.977018 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d4mbw"]
Feb 18 14:33:54 crc kubenswrapper[4957]: I0218 14:33:54.982255 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 18 14:33:54 crc kubenswrapper[4957]: W0218 14:33:54.983962 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ebbd0f0_af37_460a_88f5_ff0e855f652c.slice/crio-710aac53ca66865200c4e7edcece9251f12c9a08dda447f4c3f03012c750c583 WatchSource:0}: Error finding container 710aac53ca66865200c4e7edcece9251f12c9a08dda447f4c3f03012c750c583: Status 404 returned error can't find the container with id 710aac53ca66865200c4e7edcece9251f12c9a08dda447f4c3f03012c750c583
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.002746 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s7l47"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.002797 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.021398 4957 request.go:700] Waited for 1.929806063s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.021557 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.023541 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.023606 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.043657 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.063021 4957 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.080004 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.083456 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.088969 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d9hcp"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.101587 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.104409 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-594sd"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.118195 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.128581 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmg2c\" (UniqueName: \"kubernetes.io/projected/262c1468-53c3-406c-b186-2912e491ba70-kube-api-access-rmg2c\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pvcz\" (UID: \"262c1468-53c3-406c-b186-2912e491ba70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.156261 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"]
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.192163 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bd595c-a77a-408f-9488-3499d8d57bdb-serving-cert\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.192205 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.192941 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-serving-cert\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.192961 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.192978 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j2bh\" (UniqueName: \"kubernetes.io/projected/b57b2a2c-ac8c-4c84-84c4-c24479600b71-kube-api-access-5j2bh\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193012 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099076c9-9f78-47b8-87f1-3c9cc47e0b09-service-ca-bundle\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193030 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8523189e-83c3-4ec4-aee3-7fc8859d380a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193045 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e7531cd-b62c-452a-b6a7-716513aaad71-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6f6dt\" (UID: \"4e7531cd-b62c-452a-b6a7-716513aaad71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193060 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-config\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193086 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-oauth-serving-cert\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193102 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3f7089-9ab3-4753-b0a2-7454ed4425ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bwgmc\" (UID: \"1b3f7089-9ab3-4753-b0a2-7454ed4425ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193121 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193144 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-trusted-ca\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.193202 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:55.69318217 +0000 UTC m=+142.214046914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193928 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8523189e-83c3-4ec4-aee3-7fc8859d380a-config\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193948 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rddc7\" (UniqueName: \"kubernetes.io/projected/0eeb321c-f53d-4fa6-b824-006c3844edad-kube-api-access-rddc7\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr49g\" (UniqueName: \"kubernetes.io/projected/914e8f14-972c-4ca7-bcc6-4fc802cdfdc6-kube-api-access-dr49g\") pod \"control-plane-machine-set-operator-78cbb6b69f-mn6hj\" (UID: \"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.193980 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-serving-cert\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194014 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-registry-tls\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194027 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8523189e-83c3-4ec4-aee3-7fc8859d380a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194041 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b57b2a2c-ac8c-4c84-84c4-c24479600b71-node-pullsecrets\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194057 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/914e8f14-972c-4ca7-bcc6-4fc802cdfdc6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mn6hj\" (UID: \"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-service-ca\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652bq\" (UniqueName: \"kubernetes.io/projected/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-kube-api-access-652bq\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194121 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3925001b-348a-4dde-a066-e49891c345bb-srv-cert\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194143 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6tf\" (UniqueName: \"kubernetes.io/projected/4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39-kube-api-access-mh6tf\") pod \"migrator-59844c95c7-xsf6z\" (UID: \"4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194157 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-metrics-certs\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194444 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80bc6540-5a90-42c9-a3b6-ae9a897119dd-auth-proxy-config\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194475 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-default-certificate\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194507 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-service-ca\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194523 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a9f539-42ce-4dc0-a9d1-d23278463bfc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194643 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a9f539-42ce-4dc0-a9d1-d23278463bfc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194657 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35a9f539-42ce-4dc0-a9d1-d23278463bfc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194725 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-config\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194742 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3925001b-348a-4dde-a066-e49891c345bb-profile-collector-cert\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.194759 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8559e798-7b17-41b7-86c8-bf530c4092ea-config\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.195978 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c7a025e-0270-445c-ac05-34ffe3502176-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196019 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-client\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196053 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8559e798-7b17-41b7-86c8-bf530c4092ea-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196140 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-image-import-ca\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196295 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-bound-sa-token\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196318 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196499 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80bc6540-5a90-42c9-a3b6-ae9a897119dd-config\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196556 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8559e798-7b17-41b7-86c8-bf530c4092ea-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196594 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6tp4\" (UniqueName: \"kubernetes.io/projected/6e462cfd-ac3c-4e75-bcce-f8291746b89e-kube-api-access-k6tp4\") pod \"downloads-7954f5f757-fxh8s\" (UID: \"6e462cfd-ac3c-4e75-bcce-f8291746b89e\") " pod="openshift-console/downloads-7954f5f757-fxh8s"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196651 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrhq\" (UniqueName: \"kubernetes.io/projected/3da7ba0a-4ddc-4bca-acd8-e598854eceec-kube-api-access-tgrhq\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196671 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-etcd-client\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196962 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c7a025e-0270-445c-ac05-34ffe3502176-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.196990 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-oauth-config\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197052 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmnd\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-kube-api-access-8hmnd\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197075 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/80bc6540-5a90-42c9-a3b6-ae9a897119dd-machine-approver-tls\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-encryption-config\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7kvv\" (UniqueName: \"kubernetes.io/projected/16bd595c-a77a-408f-9488-3499d8d57bdb-kube-api-access-d7kvv\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197205 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-audit\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197439 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-proxy-tls\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-stats-auth\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x87jv\" (UniqueName: \"kubernetes.io/projected/80bc6540-5a90-42c9-a3b6-ae9a897119dd-kube-api-access-x87jv\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1365afbc-1c4d-47e9-856e-520d67653d37-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197558 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjb2\" (UniqueName: \"kubernetes.io/projected/4e7531cd-b62c-452a-b6a7-716513aaad71-kube-api-access-ltjb2\") pod \"multus-admission-controller-857f4d67dd-6f6dt\" (UID: \"4e7531cd-b62c-452a-b6a7-716513aaad71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdk7v\" (UniqueName: \"kubernetes.io/projected/099076c9-9f78-47b8-87f1-3c9cc47e0b09-kube-api-access-cdk7v\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.197614 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntb6\" (UniqueName:
\"kubernetes.io/projected/1365afbc-1c4d-47e9-856e-520d67653d37-kube-api-access-7ntb6\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198071 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-config\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198112 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-serving-cert\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198364 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5tj\" (UniqueName: \"kubernetes.io/projected/1b3f7089-9ab3-4753-b0a2-7454ed4425ac-kube-api-access-dk5tj\") pod \"package-server-manager-789f6589d5-bwgmc\" (UID: \"1b3f7089-9ab3-4753-b0a2-7454ed4425ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198389 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-etcd-serving-ca\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198412 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b57b2a2c-ac8c-4c84-84c4-c24479600b71-audit-dir\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198536 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-ca\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198555 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-config\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198599 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1365afbc-1c4d-47e9-856e-520d67653d37-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: 
\"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198637 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-service-ca-bundle\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198654 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsx4c\" (UniqueName: \"kubernetes.io/projected/3925001b-348a-4dde-a066-e49891c345bb-kube-api-access-gsx4c\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198715 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198732 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jkhw\" (UniqueName: \"kubernetes.io/projected/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-kube-api-access-7jkhw\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.198746 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1365afbc-1c4d-47e9-856e-520d67653d37-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.199176 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eeb321c-f53d-4fa6-b824-006c3844edad-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.199376 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-trusted-ca-bundle\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.199393 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0eeb321c-f53d-4fa6-b824-006c3844edad-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.223734 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" Feb 18 14:33:55 crc kubenswrapper[4957]: W0218 14:33:55.227820 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce37342_c95b_4be4_b48c_91553e81206a.slice/crio-8a9a86ade20c051332e06667e51ed8af13c3298eea6952593191428c74eaaef1 WatchSource:0}: Error finding container 8a9a86ade20c051332e06667e51ed8af13c3298eea6952593191428c74eaaef1: Status 404 returned error can't find the container with id 8a9a86ade20c051332e06667e51ed8af13c3298eea6952593191428c74eaaef1 Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.244118 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s7l47"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.301335 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.301894 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c7a025e-0270-445c-ac05-34ffe3502176-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.301924 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-client\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.301956 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-registration-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.301980 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-csi-data-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302009 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8559e798-7b17-41b7-86c8-bf530c4092ea-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302032 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-image-import-ca\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302057 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-bound-sa-token\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302081 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302107 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80bc6540-5a90-42c9-a3b6-ae9a897119dd-config\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302129 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01fa9f33-5f97-41b3-ae1f-f6421dd58827-auth-proxy-config\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302150 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1ea89768-219d-4769-9010-a34764aaee1d-certs\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302176 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8559e798-7b17-41b7-86c8-bf530c4092ea-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6tp4\" (UniqueName: \"kubernetes.io/projected/6e462cfd-ac3c-4e75-bcce-f8291746b89e-kube-api-access-k6tp4\") pod \"downloads-7954f5f757-fxh8s\" (UID: \"6e462cfd-ac3c-4e75-bcce-f8291746b89e\") " pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 
14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302222 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302239 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrhq\" (UniqueName: \"kubernetes.io/projected/3da7ba0a-4ddc-4bca-acd8-e598854eceec-kube-api-access-tgrhq\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302257 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-etcd-client\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302295 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-config-volume\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302325 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c7a025e-0270-445c-ac05-34ffe3502176-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302341 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-oauth-config\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302360 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1ea89768-219d-4769-9010-a34764aaee1d-node-bootstrap-token\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302377 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmnd\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-kube-api-access-8hmnd\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302393 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/80bc6540-5a90-42c9-a3b6-ae9a897119dd-machine-approver-tls\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302410 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-encryption-config\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302496 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-audit\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302522 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7kvv\" (UniqueName: \"kubernetes.io/projected/16bd595c-a77a-408f-9488-3499d8d57bdb-kube-api-access-d7kvv\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302546 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x87jv\" (UniqueName: \"kubernetes.io/projected/80bc6540-5a90-42c9-a3b6-ae9a897119dd-kube-api-access-x87jv\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302567 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a815ee-3601-412e-a76e-b4e15292e02c-config\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302586 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-proxy-tls\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302603 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-stats-auth\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302623 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa2af10f-a542-4777-9f7e-a2ca54798d99-signing-key\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 
crc kubenswrapper[4957]: I0218 14:33:55.302680 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1365afbc-1c4d-47e9-856e-520d67653d37-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302707 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjb2\" (UniqueName: \"kubernetes.io/projected/4e7531cd-b62c-452a-b6a7-716513aaad71-kube-api-access-ltjb2\") pod \"multus-admission-controller-857f4d67dd-6f6dt\" (UID: \"4e7531cd-b62c-452a-b6a7-716513aaad71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302732 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdk7v\" (UniqueName: \"kubernetes.io/projected/099076c9-9f78-47b8-87f1-3c9cc47e0b09-kube-api-access-cdk7v\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302758 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkpj\" (UniqueName: \"kubernetes.io/projected/455504d8-7edb-4008-9343-536491e9504a-kube-api-access-dwkpj\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ntb6\" (UniqueName: \"kubernetes.io/projected/1365afbc-1c4d-47e9-856e-520d67653d37-kube-api-access-7ntb6\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302839 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-socket-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302862 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-config\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302883 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-serving-cert\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nv2j\" (UniqueName: 
\"kubernetes.io/projected/11cb8341-3939-4c82-9745-510f73904864-kube-api-access-8nv2j\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302926 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-webhook-cert\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302946 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/455504d8-7edb-4008-9343-536491e9504a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302969 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-etcd-serving-ca\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.302991 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b57b2a2c-ac8c-4c84-84c4-c24479600b71-audit-dir\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303017 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5tj\" (UniqueName: \"kubernetes.io/projected/1b3f7089-9ab3-4753-b0a2-7454ed4425ac-kube-api-access-dk5tj\") pod \"package-server-manager-789f6589d5-bwgmc\" (UID: \"1b3f7089-9ab3-4753-b0a2-7454ed4425ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303041 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3609f97-bc24-49d9-994a-026f5b1f8c73-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-ca\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303090 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-config\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303109 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1365afbc-1c4d-47e9-856e-520d67653d37-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303129 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-service-ca-bundle\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303149 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsx4c\" (UniqueName: \"kubernetes.io/projected/3925001b-348a-4dde-a066-e49891c345bb-kube-api-access-gsx4c\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303166 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01fa9f33-5f97-41b3-ae1f-f6421dd58827-images\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303180 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e681c560-a7ad-4a54-831c-122ccd49c1c9-config-volume\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303215 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jkhw\" (UniqueName: \"kubernetes.io/projected/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-kube-api-access-7jkhw\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303230 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1365afbc-1c4d-47e9-856e-520d67653d37-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 
14:33:55.303249 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdv4s\" (UniqueName: \"kubernetes.io/projected/06d8fe1f-0a71-480f-899e-a070c091cb79-kube-api-access-jdv4s\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303263 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxv59\" (UniqueName: \"kubernetes.io/projected/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-kube-api-access-pxv59\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303279 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eeb321c-f53d-4fa6-b824-006c3844edad-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303296 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpzn\" (UniqueName: \"kubernetes.io/projected/35a815ee-3601-412e-a76e-b4e15292e02c-kube-api-access-htpzn\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303310 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa2af10f-a542-4777-9f7e-a2ca54798d99-signing-cabundle\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303325 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-secret-volume\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303344 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-trusted-ca-bundle\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303360 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eeb321c-f53d-4fa6-b824-006c3844edad-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303383 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bd595c-a77a-408f-9488-3499d8d57bdb-serving-cert\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303397 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-mountpoint-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303446 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a815ee-3601-412e-a76e-b4e15292e02c-serving-cert\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303475 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-serving-cert\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303498 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/01fa9f33-5f97-41b3-ae1f-f6421dd58827-proxy-tls\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303524 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303547 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j2bh\" (UniqueName: \"kubernetes.io/projected/b57b2a2c-ac8c-4c84-84c4-c24479600b71-kube-api-access-5j2bh\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303571 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wth85\" (UniqueName: \"kubernetes.io/projected/e681c560-a7ad-4a54-831c-122ccd49c1c9-kube-api-access-wth85\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303607 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099076c9-9f78-47b8-87f1-3c9cc47e0b09-service-ca-bundle\") pod \"router-default-5444994796-jg752\" (UID: 
\"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303629 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8523189e-83c3-4ec4-aee3-7fc8859d380a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303650 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e7531cd-b62c-452a-b6a7-716513aaad71-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6f6dt\" (UID: \"4e7531cd-b62c-452a-b6a7-716513aaad71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303674 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8sw\" (UniqueName: \"kubernetes.io/projected/01fa9f33-5f97-41b3-ae1f-f6421dd58827-kube-api-access-vl8sw\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303694 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-tmpfs\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303715 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-config\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303741 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-oauth-serving-cert\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303762 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3f7089-9ab3-4753-b0a2-7454ed4425ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bwgmc\" (UID: \"1b3f7089-9ab3-4753-b0a2-7454ed4425ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303784 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303806 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3609f97-bc24-49d9-994a-026f5b1f8c73-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-trusted-ca\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.303832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-trusted-ca\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306223 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8523189e-83c3-4ec4-aee3-7fc8859d380a-config\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306250 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rddc7\" (UniqueName: \"kubernetes.io/projected/0eeb321c-f53d-4fa6-b824-006c3844edad-kube-api-access-rddc7\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr49g\" (UniqueName: \"kubernetes.io/projected/914e8f14-972c-4ca7-bcc6-4fc802cdfdc6-kube-api-access-dr49g\") pod \"control-plane-machine-set-operator-78cbb6b69f-mn6hj\" (UID: \"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306296 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-registry-tls\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306315 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8523189e-83c3-4ec4-aee3-7fc8859d380a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306336 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-serving-cert\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306360 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b57b2a2c-ac8c-4c84-84c4-c24479600b71-node-pullsecrets\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306380 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/914e8f14-972c-4ca7-bcc6-4fc802cdfdc6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mn6hj\" (UID: \"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306404 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-plugins-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306453 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3925001b-348a-4dde-a066-e49891c345bb-srv-cert\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306478 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306502 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8gdb\" (UniqueName: \"kubernetes.io/projected/1ea89768-219d-4769-9010-a34764aaee1d-kube-api-access-l8gdb\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306529 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-service-ca\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306552 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-652bq\" (UniqueName: \"kubernetes.io/projected/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-kube-api-access-652bq\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306574 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6tf\" (UniqueName: \"kubernetes.io/projected/4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39-kube-api-access-mh6tf\") pod \"migrator-59844c95c7-xsf6z\" (UID: \"4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306598 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-metrics-certs\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306620 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlpv6\" (UniqueName: \"kubernetes.io/projected/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-kube-api-access-tlpv6\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80bc6540-5a90-42c9-a3b6-ae9a897119dd-auth-proxy-config\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306683 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68msh\" (UniqueName: \"kubernetes.io/projected/c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae-kube-api-access-68msh\") pod \"ingress-canary-xrkwr\" (UID: \"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae\") " pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306707 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-default-certificate\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306730 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7t87\" (UniqueName: \"kubernetes.io/projected/fa2af10f-a542-4777-9f7e-a2ca54798d99-kube-api-access-h7t87\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306752 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-apiservice-cert\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306772 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/455504d8-7edb-4008-9343-536491e9504a-srv-cert\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306796 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-service-ca\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306820 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a9f539-42ce-4dc0-a9d1-d23278463bfc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306847 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a9f539-42ce-4dc0-a9d1-d23278463bfc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e681c560-a7ad-4a54-831c-122ccd49c1c9-metrics-tls\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306894 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35a9f539-42ce-4dc0-a9d1-d23278463bfc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306924 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3609f97-bc24-49d9-994a-026f5b1f8c73-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306945 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae-cert\") pod \"ingress-canary-xrkwr\" (UID: 
\"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae\") " pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306969 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-config\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.306993 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3925001b-348a-4dde-a066-e49891c345bb-profile-collector-cert\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.307016 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntw7\" (UniqueName: \"kubernetes.io/projected/a3609f97-bc24-49d9-994a-026f5b1f8c73-kube-api-access-gntw7\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.307041 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8559e798-7b17-41b7-86c8-bf530c4092ea-config\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.307682 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8559e798-7b17-41b7-86c8-bf530c4092ea-config\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.307706 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:55.807662408 +0000 UTC m=+142.328527172 (durationBeforeRetry 500ms). 
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.308287 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c7a025e-0270-445c-ac05-34ffe3502176-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.308355 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8523189e-83c3-4ec4-aee3-7fc8859d380a-config\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.310307 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b57b2a2c-ac8c-4c84-84c4-c24479600b71-audit-dir\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.310840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-trusted-ca-bundle\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.311020 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-ca\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.311774 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eeb321c-f53d-4fa6-b824-006c3844edad-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.311800 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099076c9-9f78-47b8-87f1-3c9cc47e0b09-service-ca-bundle\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.311953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-config\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.312027 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.312084 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-oauth-serving-cert\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.312620 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80bc6540-5a90-42c9-a3b6-ae9a897119dd-config\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.312814 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-etcd-serving-ca\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.313163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-config\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.313206 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80bc6540-5a90-42c9-a3b6-ae9a897119dd-auth-proxy-config\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.315892 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-client\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.316114 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.317585 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
\"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.319770 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/16bd595c-a77a-408f-9488-3499d8d57bdb-etcd-service-ca\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.320582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-image-import-ca\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.320729 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-audit\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.321672 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-service-ca-bundle\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.321808 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-registry-tls\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.322095 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a9f539-42ce-4dc0-a9d1-d23278463bfc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.322182 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b57b2a2c-ac8c-4c84-84c4-c24479600b71-node-pullsecrets\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.322387 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-config\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.323270 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.324192 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-service-ca\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.324548 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b2a2c-ac8c-4c84-84c4-c24479600b71-config\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.327556 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-encryption-config\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.327937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a9f539-42ce-4dc0-a9d1-d23278463bfc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.327961 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-serving-cert\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.328947 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-serving-cert\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.334848 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bd595c-a77a-408f-9488-3499d8d57bdb-serving-cert\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.335622 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/80bc6540-5a90-42c9-a3b6-ae9a897119dd-machine-approver-tls\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc 
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.335804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3925001b-348a-4dde-a066-e49891c345bb-profile-collector-cert\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.338489 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/914e8f14-972c-4ca7-bcc6-4fc802cdfdc6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mn6hj\" (UID: \"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.340015 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1365afbc-1c4d-47e9-856e-520d67653d37-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.341106 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-metrics-certs\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.341447 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rddc7\" (UniqueName: \"kubernetes.io/projected/0eeb321c-f53d-4fa6-b824-006c3844edad-kube-api-access-rddc7\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.346304 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-etcd-client\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.347434 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-stats-auth\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.347962 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/099076c9-9f78-47b8-87f1-3c9cc47e0b09-default-certificate\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.348394 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3925001b-348a-4dde-a066-e49891c345bb-srv-cert\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.348880 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c7a025e-0270-445c-ac05-34ffe3502176-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.349065 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-proxy-tls\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.349196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eeb321c-f53d-4fa6-b824-006c3844edad-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-crshz\" (UID: \"0eeb321c-f53d-4fa6-b824-006c3844edad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.349492 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8559e798-7b17-41b7-86c8-bf530c4092ea-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.349637 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b57b2a2c-ac8c-4c84-84c4-c24479600b71-serving-cert\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.352840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8523189e-83c3-4ec4-aee3-7fc8859d380a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.354349 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1365afbc-1c4d-47e9-856e-520d67653d37-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.355249 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e7531cd-b62c-452a-b6a7-716513aaad71-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6f6dt\" (UID: \"4e7531cd-b62c-452a-b6a7-716513aaad71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt"
pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.355407 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3f7089-9ab3-4753-b0a2-7454ed4425ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bwgmc\" (UID: \"1b3f7089-9ab3-4753-b0a2-7454ed4425ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.356226 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-oauth-config\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.364528 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8559e798-7b17-41b7-86c8-bf530c4092ea-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q9xrs\" (UID: \"8559e798-7b17-41b7-86c8-bf530c4092ea\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.381151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr49g\" (UniqueName: \"kubernetes.io/projected/914e8f14-972c-4ca7-bcc6-4fc802cdfdc6-kube-api-access-dr49g\") pod \"control-plane-machine-set-operator-78cbb6b69f-mn6hj\" (UID: \"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.403535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-bound-sa-token\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.407980 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1ea89768-219d-4769-9010-a34764aaee1d-node-bootstrap-token\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408057 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a815ee-3601-412e-a76e-b4e15292e02c-config\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408084 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa2af10f-a542-4777-9f7e-a2ca54798d99-signing-key\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408126 4957 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dwkpj\" (UniqueName: \"kubernetes.io/projected/455504d8-7edb-4008-9343-536491e9504a-kube-api-access-dwkpj\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408174 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-socket-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408201 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nv2j\" (UniqueName: \"kubernetes.io/projected/11cb8341-3939-4c82-9745-510f73904864-kube-api-access-8nv2j\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408223 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-webhook-cert\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408247 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/455504d8-7edb-4008-9343-536491e9504a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408277 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3609f97-bc24-49d9-994a-026f5b1f8c73-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01fa9f33-5f97-41b3-ae1f-f6421dd58827-images\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408340 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e681c560-a7ad-4a54-831c-122ccd49c1c9-config-volume\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408380 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdv4s\" (UniqueName: \"kubernetes.io/projected/06d8fe1f-0a71-480f-899e-a070c091cb79-kube-api-access-jdv4s\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: 
\"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408404 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxv59\" (UniqueName: \"kubernetes.io/projected/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-kube-api-access-pxv59\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408635 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpzn\" (UniqueName: \"kubernetes.io/projected/35a815ee-3601-412e-a76e-b4e15292e02c-kube-api-access-htpzn\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408662 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa2af10f-a542-4777-9f7e-a2ca54798d99-signing-cabundle\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408684 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-secret-volume\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-mountpoint-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.409098 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a815ee-3601-412e-a76e-b4e15292e02c-config\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.409676 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-mountpoint-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.408124 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-d9hcp"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.409873 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a815ee-3601-412e-a76e-b4e15292e02c-serving-cert\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.409918 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/01fa9f33-5f97-41b3-ae1f-f6421dd58827-proxy-tls\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.409951 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wth85\" (UniqueName: \"kubernetes.io/projected/e681c560-a7ad-4a54-831c-122ccd49c1c9-kube-api-access-wth85\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.409989 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8sw\" (UniqueName: \"kubernetes.io/projected/01fa9f33-5f97-41b3-ae1f-f6421dd58827-kube-api-access-vl8sw\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410013 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-tmpfs\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410036 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3609f97-bc24-49d9-994a-026f5b1f8c73-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-plugins-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410094 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410120 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l8gdb\" (UniqueName: \"kubernetes.io/projected/1ea89768-219d-4769-9010-a34764aaee1d-kube-api-access-l8gdb\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlpv6\" (UniqueName: \"kubernetes.io/projected/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-kube-api-access-tlpv6\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410228 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68msh\" (UniqueName: \"kubernetes.io/projected/c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae-kube-api-access-68msh\") pod \"ingress-canary-xrkwr\" (UID: \"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae\") " pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7t87\" (UniqueName: \"kubernetes.io/projected/fa2af10f-a542-4777-9f7e-a2ca54798d99-kube-api-access-h7t87\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410700 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-apiservice-cert\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410729 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/455504d8-7edb-4008-9343-536491e9504a-srv-cert\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410753 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e681c560-a7ad-4a54-831c-122ccd49c1c9-metrics-tls\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410788 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3609f97-bc24-49d9-994a-026f5b1f8c73-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410810 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae-cert\") pod \"ingress-canary-xrkwr\" (UID: \"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae\") " pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:55 crc 
kubenswrapper[4957]: I0218 14:33:55.410957 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gntw7\" (UniqueName: \"kubernetes.io/projected/a3609f97-bc24-49d9-994a-026f5b1f8c73-kube-api-access-gntw7\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.410994 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-registration-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.411019 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-csi-data-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.411048 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01fa9f33-5f97-41b3-ae1f-f6421dd58827-auth-proxy-config\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.411070 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1ea89768-219d-4769-9010-a34764aaee1d-certs\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.411104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.411147 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-config-volume\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.411293 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa2af10f-a542-4777-9f7e-a2ca54798d99-signing-cabundle\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.411678 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.412133 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-config-volume\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.412903 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-socket-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.413220 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-secret-volume\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.414085 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3609f97-bc24-49d9-994a-026f5b1f8c73-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.414521 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/455504d8-7edb-4008-9343-536491e9504a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.414787 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-apiservice-cert\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.414864 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01fa9f33-5f97-41b3-ae1f-f6421dd58827-images\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.414978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01fa9f33-5f97-41b3-ae1f-f6421dd58827-auth-proxy-config\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.415046 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-registration-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.415061 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-csi-data-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.415131 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11cb8341-3939-4c82-9745-510f73904864-plugins-dir\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.415226 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-webhook-cert\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.415491 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e681c560-a7ad-4a54-831c-122ccd49c1c9-config-volume\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.415814 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-tmpfs\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.416359 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.418877 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/01fa9f33-5f97-41b3-ae1f-f6421dd58827-proxy-tls\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl"
Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.418879 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a815ee-3601-412e-a76e-b4e15292e02c-serving-cert\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a815ee-3601-412e-a76e-b4e15292e02c-serving-cert\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.419468 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1ea89768-219d-4769-9010-a34764aaee1d-certs\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.419753 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1ea89768-219d-4769-9010-a34764aaee1d-node-bootstrap-token\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.419910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/455504d8-7edb-4008-9343-536491e9504a-srv-cert\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.420242 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa2af10f-a542-4777-9f7e-a2ca54798d99-signing-key\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.420689 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e681c560-a7ad-4a54-831c-122ccd49c1c9-metrics-tls\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.423195 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.423587 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae-cert\") pod \"ingress-canary-xrkwr\" (UID: \"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae\") " pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.429581 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5tj\" (UniqueName: \"kubernetes.io/projected/1b3f7089-9ab3-4753-b0a2-7454ed4425ac-kube-api-access-dk5tj\") pod \"package-server-manager-789f6589d5-bwgmc\" (UID: \"1b3f7089-9ab3-4753-b0a2-7454ed4425ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 
14:33:55.430212 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3609f97-bc24-49d9-994a-026f5b1f8c73-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.435228 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.438724 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdk7v\" (UniqueName: \"kubernetes.io/projected/099076c9-9f78-47b8-87f1-3c9cc47e0b09-kube-api-access-cdk7v\") pod \"router-default-5444994796-jg752\" (UID: \"099076c9-9f78-47b8-87f1-3c9cc47e0b09\") " pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.458921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x87jv\" (UniqueName: \"kubernetes.io/projected/80bc6540-5a90-42c9-a3b6-ae9a897119dd-kube-api-access-x87jv\") pod \"machine-approver-56656f9798-dk2nb\" (UID: \"80bc6540-5a90-42c9-a3b6-ae9a897119dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.478921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmnd\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-kube-api-access-8hmnd\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.503375 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ntb6\" (UniqueName: \"kubernetes.io/projected/1365afbc-1c4d-47e9-856e-520d67653d37-kube-api-access-7ntb6\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.510985 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lp5cj"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.512563 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.512729 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.012709171 +0000 UTC m=+142.533573915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.512892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.513305 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.013294917 +0000 UTC m=+142.534159661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: W0218 14:33:55.529120 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod609d339c_7830_4e17_b847_da17573e1ed0.slice/crio-5652f1f7b30a9ca29abe4f9463be7ab2821452d2910a20492a6b9c8ee50f2587 WatchSource:0}: Error finding container 5652f1f7b30a9ca29abe4f9463be7ab2821452d2910a20492a6b9c8ee50f2587: Status 404 returned error can't find the container with id 5652f1f7b30a9ca29abe4f9463be7ab2821452d2910a20492a6b9c8ee50f2587 Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.530843 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.531918 4957 util.go:30] "No sandbox for pod can be found. 
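
Both failures above share one root cause: the kubelet cannot find a registered CSI driver named kubevirt.io.hostpath-provisioner, so neither the Unmounter.TearDownAt path nor the attacher.MountDevice path can construct a client to talk to it. The driver pod itself (csi-hostpathplugin-gdjrj) is only now having its volumes mounted, including registration-dir, which is where a CSI driver's registrar normally exposes the socket it uses to register with the kubelet; until that registration completes, every mount or unmount referencing the driver fails and is requeued. A minimal sketch of a lookup that produces this error shape, with invented names, not the kubelet's actual code:

    // Illustrative sketch of a CSI driver-registry lookup; names are invented.
    package main

    import (
        "fmt"
        "sync"
    )

    type csiRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> plugin socket path
    }

    func (r *csiRegistry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        sock, ok := r.drivers[name]
        if !ok {
            // Same failure shape as the TearDownAt/MountDevice errors above.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return sock, nil
    }

    func main() {
        reg := &csiRegistry{drivers: map[string]string{}}
        if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println(err) // reproduced until the driver registers via its socket
        }
    }

Once the driver's registrar comes up and the kubelet completes the registration handshake, the same lookup starts succeeding, which is why these errors normally stop on their own.
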
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.532550 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1365afbc-1c4d-47e9-856e-520d67653d37-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5ftr\" (UID: \"1365afbc-1c4d-47e9-856e-520d67653d37\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: W0218 14:33:55.536285 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22fac79a_ec42_4f23_9f6b_79b68cde0489.slice/crio-587d103483a7c63db911a364da4d686ef3f1a51b5f005a2e84a1967149641a28 WatchSource:0}: Error finding container 587d103483a7c63db911a364da4d686ef3f1a51b5f005a2e84a1967149641a28: Status 404 returned error can't find the container with id 587d103483a7c63db911a364da4d686ef3f1a51b5f005a2e84a1967149641a28 Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.539861 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.545605 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jkhw\" (UniqueName: \"kubernetes.io/projected/e1fce9e3-ffff-49f7-9f88-f5fd4cc98978-kube-api-access-7jkhw\") pod \"machine-config-controller-84d6567774-lwq5z\" (UID: \"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.551337 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.557079 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.558172 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8523189e-83c3-4ec4-aee3-7fc8859d380a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f8bjt\" (UID: \"8523189e-83c3-4ec4-aee3-7fc8859d380a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" Feb 18 14:33:55 crc kubenswrapper[4957]: W0218 14:33:55.558645 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod262c1468_53c3_406c_b186_2912e491ba70.slice/crio-5ccb4a1b62c46c7227af511341846a14da8fd5d55c71c9c98d47b960567c86f5 WatchSource:0}: Error finding container 5ccb4a1b62c46c7227af511341846a14da8fd5d55c71c9c98d47b960567c86f5: Status 404 returned error can't find the container with id 5ccb4a1b62c46c7227af511341846a14da8fd5d55c71c9c98d47b960567c86f5 Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.578341 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j2bh\" (UniqueName: \"kubernetes.io/projected/b57b2a2c-ac8c-4c84-84c4-c24479600b71-kube-api-access-5j2bh\") pod \"apiserver-76f77b778f-fpnw9\" (UID: \"b57b2a2c-ac8c-4c84-84c4-c24479600b71\") " pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.591379 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.598069 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35a9f539-42ce-4dc0-a9d1-d23278463bfc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fjrfx\" (UID: \"35a9f539-42ce-4dc0-a9d1-d23278463bfc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.605231 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.615041 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.615235 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.615538 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.115522355 +0000 UTC m=+142.636387089 (durationBeforeRetry 500ms). 
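
The nestedpendingoperations errors repeat on a schedule rather than in a tight loop: each failure stamps a deadline ("No retries permitted until ...", with durationBeforeRetry 500ms here), and the reconciler skips the operation until that deadline passes. The m=+142.6... suffix is Go's monotonic clock reading, printed alongside the wall-clock time. A rough sketch of that gating, assuming an initial 500ms delay that doubles up to a cap on repeated failures (the exact growth policy is an assumption, not read from this log):

    // Illustrative per-operation backoff gate; not the kubelet's implementation.
    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        delay    time.Duration // durationBeforeRetry in the log lines
        deadline time.Time     // "no retries permitted until ..."
    }

    func (b *backoff) failed(now time.Time) {
        if b.delay == 0 {
            b.delay = 500 * time.Millisecond // initial delay seen in the log
        } else if b.delay < 2*time.Minute {
            b.delay *= 2 // assumed doubling, up to a cap
        }
        b.deadline = now.Add(b.delay)
    }

    func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.deadline) }

    func main() {
        var b backoff
        now := time.Now()
        b.failed(now)
        fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
            b.deadline.Format(time.RFC3339Nano), b.delay)
        fmt.Println("retry allowed immediately?", b.allowed(now)) // false until the deadline passes
    }
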
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.617216 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.617884 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.11784258 +0000 UTC m=+142.638707504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.618276 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsx4c\" (UniqueName: \"kubernetes.io/projected/3925001b-348a-4dde-a066-e49891c345bb-kube-api-access-gsx4c\") pod \"catalog-operator-68c6474976-8bmsm\" (UID: \"3925001b-348a-4dde-a066-e49891c345bb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.621788 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.629085 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.630792 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.636881 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.640488 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7kvv\" (UniqueName: \"kubernetes.io/projected/16bd595c-a77a-408f-9488-3499d8d57bdb-kube-api-access-d7kvv\") pod \"etcd-operator-b45778765-nddjw\" (UID: \"16bd595c-a77a-408f-9488-3499d8d57bdb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.644172 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.665371 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6tp4\" (UniqueName: \"kubernetes.io/projected/6e462cfd-ac3c-4e75-bcce-f8291746b89e-kube-api-access-k6tp4\") pod \"downloads-7954f5f757-fxh8s\" (UID: \"6e462cfd-ac3c-4e75-bcce-f8291746b89e\") " pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.682900 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tmknf"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.687961 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrhq\" (UniqueName: \"kubernetes.io/projected/3da7ba0a-4ddc-4bca-acd8-e598854eceec-kube-api-access-tgrhq\") pod \"console-f9d7485db-hvb66\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.699561 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-594sd"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.718233 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.718475 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-652bq\" (UniqueName: \"kubernetes.io/projected/7e32179f-a59d-44e1-9a56-ca25b8c5ff21-kube-api-access-652bq\") pod \"authentication-operator-69f744f599-vd6hx\" (UID: \"7e32179f-a59d-44e1-9a56-ca25b8c5ff21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.718976 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.218958957 +0000 UTC m=+142.739823691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.719768 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6tf\" (UniqueName: \"kubernetes.io/projected/4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39-kube-api-access-mh6tf\") pod \"migrator-59844c95c7-xsf6z\" (UID: \"4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.728713 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.738947 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.743194 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjb2\" (UniqueName: \"kubernetes.io/projected/4e7531cd-b62c-452a-b6a7-716513aaad71-kube-api-access-ltjb2\") pod \"multus-admission-controller-857f4d67dd-6f6dt\" (UID: \"4e7531cd-b62c-452a-b6a7-716513aaad71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.751782 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.761491 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:33:55 crc kubenswrapper[4957]: W0218 14:33:55.796783 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8d0adaf_a3c6_4121_970c_1f6205db177e.slice/crio-d1592f6d814192d72d91450a827e64e20af5adf513187b9ecdfef1d572ac3c49 WatchSource:0}: Error finding container d1592f6d814192d72d91450a827e64e20af5adf513187b9ecdfef1d572ac3c49: Status 404 returned error can't find the container with id d1592f6d814192d72d91450a827e64e20af5adf513187b9ecdfef1d572ac3c49 Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.798450 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxv59\" (UniqueName: \"kubernetes.io/projected/9a161f1b-77bb-4a9d-9bfc-345bb46d439b-kube-api-access-pxv59\") pod \"packageserver-d55dfcdfc-q4vp7\" (UID: \"9a161f1b-77bb-4a9d-9bfc-345bb46d439b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.801819 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz"] Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.804609 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpzn\" (UniqueName: \"kubernetes.io/projected/35a815ee-3601-412e-a76e-b4e15292e02c-kube-api-access-htpzn\") pod \"service-ca-operator-777779d784-nmxbx\" (UID: \"35a815ee-3601-412e-a76e-b4e15292e02c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.816918 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.821641 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.822037 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.322017489 +0000 UTC m=+142.842882233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.847080 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkpj\" (UniqueName: \"kubernetes.io/projected/455504d8-7edb-4008-9343-536491e9504a-kube-api-access-dwkpj\") pod \"olm-operator-6b444d44fb-qc4wh\" (UID: \"455504d8-7edb-4008-9343-536491e9504a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.852921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68msh\" (UniqueName: \"kubernetes.io/projected/c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae-kube-api-access-68msh\") pod \"ingress-canary-xrkwr\" (UID: \"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae\") " pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.864674 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.867365 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7t87\" (UniqueName: \"kubernetes.io/projected/fa2af10f-a542-4777-9f7e-a2ca54798d99-kube-api-access-h7t87\") pod \"service-ca-9c57cc56f-s74tf\" (UID: \"fa2af10f-a542-4777-9f7e-a2ca54798d99\") " pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.879510 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.895550 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.904673 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gntw7\" (UniqueName: \"kubernetes.io/projected/a3609f97-bc24-49d9-994a-026f5b1f8c73-kube-api-access-gntw7\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.909727 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nv2j\" (UniqueName: \"kubernetes.io/projected/11cb8341-3939-4c82-9745-510f73904864-kube-api-access-8nv2j\") pod \"csi-hostpathplugin-gdjrj\" (UID: \"11cb8341-3939-4c82-9745-510f73904864\") " pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.916689 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3609f97-bc24-49d9-994a-026f5b1f8c73-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nld6v\" (UID: \"a3609f97-bc24-49d9-994a-026f5b1f8c73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.924809 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.925042 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.424994998 +0000 UTC m=+142.945859742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.925162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:55 crc kubenswrapper[4957]: E0218 14:33:55.925955 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.425640506 +0000 UTC m=+142.946505260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.951950 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.967944 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.974173 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.979138 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8sw\" (UniqueName: \"kubernetes.io/projected/01fa9f33-5f97-41b3-ae1f-f6421dd58827-kube-api-access-vl8sw\") pod \"machine-config-operator-74547568cd-txsxl\" (UID: \"01fa9f33-5f97-41b3-ae1f-f6421dd58827\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.983913 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" Feb 18 14:33:55 crc kubenswrapper[4957]: I0218 14:33:55.990449 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.001092 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdv4s\" (UniqueName: \"kubernetes.io/projected/06d8fe1f-0a71-480f-899e-a070c091cb79-kube-api-access-jdv4s\") pod \"marketplace-operator-79b997595-sfz7k\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.002000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wth85\" (UniqueName: \"kubernetes.io/projected/e681c560-a7ad-4a54-831c-122ccd49c1c9-kube-api-access-wth85\") pod \"dns-default-wbzr4\" (UID: \"e681c560-a7ad-4a54-831c-122ccd49c1c9\") " pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.006989 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.009463 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8gdb\" (UniqueName: \"kubernetes.io/projected/1ea89768-219d-4769-9010-a34764aaee1d-kube-api-access-l8gdb\") pod \"machine-config-server-nx4lr\" (UID: \"1ea89768-219d-4769-9010-a34764aaee1d\") " pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.017449 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wbzr4" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.018953 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" event={"ID":"1889400f-2fff-4c67-b401-966e820d5a26","Type":"ContainerStarted","Data":"e23c10a3ba4dad01cd3863e3755254e2ab1b98aa286b47be1572a0b7b958fce6"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.020265 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nx4lr" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.021452 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" event={"ID":"262c1468-53c3-406c-b186-2912e491ba70","Type":"ContainerStarted","Data":"5ccb4a1b62c46c7227af511341846a14da8fd5d55c71c9c98d47b960567c86f5"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.029459 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.029789 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xrkwr" Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.030374 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.529686024 +0000 UTC m=+143.050550768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.031402 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.531389491 +0000 UTC m=+143.052254235 (durationBeforeRetry 500ms). 
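
The many "No sandbox for pod can be found. Need to start a new one" lines are the expected first step of pod startup on this node: for each pod the kubelet asks the container runtime (CRI-O here, per the crio-* cgroup names elsewhere in the log) for an existing pod sandbox, finds none, and schedules sandbox creation; the subsequent PLEG ContainerStarted events carrying bare container IDs are those sandboxes coming up. As a sketch, the decision reduces to roughly this (invented types, not the kubelet's kuberuntime code):

    // Sketch of the decision behind "No sandbox for pod can be found."
    package main

    import "fmt"

    type podStatus struct {
        sandboxIDs []string // sandboxes the runtime reports for this pod
    }

    func needsNewSandbox(s podStatus) bool {
        return len(s.sandboxIDs) == 0 // nothing to attach containers to yet
    }

    func main() {
        s := podStatus{} // runtime returned no sandboxes for the pod
        if needsNewSandbox(s) {
            fmt.Println("No sandbox for pod can be found. Need to start a new one")
        }
    }
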
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.031707 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" event={"ID":"22fac79a-ec42-4f23-9f6b-79b68cde0489","Type":"ContainerStarted","Data":"cc791ae2a4982630720ab1449f924dd0be2ddda54f132679b01e7c24439e166a"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.031753 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" event={"ID":"22fac79a-ec42-4f23-9f6b-79b68cde0489","Type":"ContainerStarted","Data":"587d103483a7c63db911a364da4d686ef3f1a51b5f005a2e84a1967149641a28"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.030676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.044601 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" event={"ID":"609d339c-7830-4e17-b847-da17573e1ed0","Type":"ContainerStarted","Data":"20e4ae50292bda4da977486c8b0618e87da93f79621b99e9fbb2ef7f7ee55626"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.044689 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" event={"ID":"609d339c-7830-4e17-b847-da17573e1ed0","Type":"ContainerStarted","Data":"5652f1f7b30a9ca29abe4f9463be7ab2821452d2910a20492a6b9c8ee50f2587"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.045451 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.047399 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.048406 4957 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lp5cj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.048550 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.048849 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" event={"ID":"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37","Type":"ContainerStarted","Data":"0b63ff039845e57120fad56793cd80d0597b669af490e0d53a8acfa4c18505c5"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.050127 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlpv6\" (UniqueName: \"kubernetes.io/projected/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-kube-api-access-tlpv6\") pod \"collect-profiles-29523750-lq4hl\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.051642 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" event={"ID":"2d30d957-c658-4ce4-9b04-3f1d64fb67b7","Type":"ContainerStarted","Data":"a9e6a9ad56696bcd6dfd3b32e6ea9c2bcf4aca0e403105f96e117d4daae08e28"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.053965 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" event={"ID":"2d30d957-c658-4ce4-9b04-3f1d64fb67b7","Type":"ContainerStarted","Data":"1858c9499037b82bf933e28fee40d59b31f8a13106729b424a546822066b4970"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.056834 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-d9hcp"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.056526 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.056924 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" event={"ID":"b8d0adaf-a3c6-4121-970c-1f6205db177e","Type":"ContainerStarted","Data":"d1592f6d814192d72d91450a827e64e20af5adf513187b9ecdfef1d572ac3c49"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.056932 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.058204 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" event={"ID":"9ce37342-c95b-4be4-b48c-91553e81206a","Type":"ContainerStarted","Data":"f166d8c7004ce26d6faae6b20632b93b0d08cfb36dd73a271dc623e8f4c622df"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.058273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" event={"ID":"9ce37342-c95b-4be4-b48c-91553e81206a","Type":"ContainerStarted","Data":"8a9a86ade20c051332e06667e51ed8af13c3298eea6952593191428c74eaaef1"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.058638 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.064341 4957 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fkdzp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.064672 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.101461 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" event={"ID":"f2c035f3-f03e-4263-a7a5-e821b1fc5488","Type":"ContainerStarted","Data":"d5d73a0da56b4c732a13dc9956d6a70237d87ba903b15242e6f42fa9366e90ba"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.101539 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" event={"ID":"f2c035f3-f03e-4263-a7a5-e821b1fc5488","Type":"ContainerStarted","Data":"f2fbd655f25dcceb392b844db3e3f19dfe6f6d8269b56c234730aa3eaeaaa45b"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.133325 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jg752" event={"ID":"099076c9-9f78-47b8-87f1-3c9cc47e0b09","Type":"ContainerStarted","Data":"21fb0c5d9b995cf9e05e30502f74d11620e42160d0d5d08b72e8707d702013fa"}
Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.134112 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.135957 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.635934414 +0000 UTC m=+143.156799158 (durationBeforeRetry 500ms). 
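
The readiness probe failures above are consistent with containers that have only just started: the kubelet dials the container's endpoint (/healthz or /readyz on the pod IP) before the server inside is listening, gets connection refused, and marks the pod unready. Because these are readiness rather than liveness probes, nothing is restarted; the probes simply retry until the endpoint answers. A minimal probe in the same spirit, assuming plain HTTP semantics rather than the kubelet's prober package (kubelet HTTPS probes skip certificate verification, mirrored below):

    // Minimal readiness-style HTTP probe; invented helper, not the kubelet prober.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) (string, error) {
        client := &http.Client{
            Timeout: time.Second,
            // Probe HTTPS endpoints without validating the serving certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return "failure", err // e.g. "dial tcp 10.217.0.7:8443: connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success", nil
        }
        return "failure", fmt.Errorf("status %d", resp.StatusCode)
    }

    func main() {
        result, err := probe("https://10.217.0.7:8443/healthz")
        fmt.Println(result, err)
    }
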
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.137354 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" event={"ID":"30c49091-ad03-4fcf-a0d4-3955a1ddaf97","Type":"ContainerStarted","Data":"574d67a980440465832d4eafaa7d6eee7e6c5151b8deca9f87dc007fc85b876e"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.137390 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" event={"ID":"30c49091-ad03-4fcf-a0d4-3955a1ddaf97","Type":"ContainerStarted","Data":"12d9abd2bbc9952f6c318363f9852bf0ac093ba1a945bcdc2ec515ccb2fb7f55"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.193746 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" event={"ID":"7ebbd0f0-af37-460a-88f5-ff0e855f652c","Type":"ContainerStarted","Data":"04524bf3435d667cf6f5ddc46ead884dfef97f10d341e9e11d9be5d921a8b630"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.193922 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" event={"ID":"7ebbd0f0-af37-460a-88f5-ff0e855f652c","Type":"ContainerStarted","Data":"7ef73a936c8c1720268f5259a5452af81c60de4ccfe14a8de7a95ae162da6e1e"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.194054 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" event={"ID":"7ebbd0f0-af37-460a-88f5-ff0e855f652c","Type":"ContainerStarted","Data":"710aac53ca66865200c4e7edcece9251f12c9a08dda447f4c3f03012c750c583"} Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.236393 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.237125 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.737111503 +0000 UTC m=+143.257976247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.260091 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.299387 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.337439 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.338813 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.838792727 +0000 UTC m=+143.359657481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.342097 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs"] Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.342131 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj"] Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.342152 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt"] Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.438964 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.439483 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:56.939458852 +0000 UTC m=+143.460323766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: W0218 14:33:56.522715 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8523189e_83c3_4ec4_aee3_7fc8859d380a.slice/crio-cca7eb1166ef4a6db8b852a25953c2a5e506625b1afc7fa63cd0b3b7802acce8 WatchSource:0}: Error finding container cca7eb1166ef4a6db8b852a25953c2a5e506625b1afc7fa63cd0b3b7802acce8: Status 404 returned error can't find the container with id cca7eb1166ef4a6db8b852a25953c2a5e506625b1afc7fa63cd0b3b7802acce8 Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.541946 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.542447 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.042431091 +0000 UTC m=+143.563295835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.567828 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr"] Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.650378 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.651612 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.151592931 +0000 UTC m=+143.672457675 (durationBeforeRetry 500ms). 
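
The manager.go:1169 warnings scattered through this window look like a benign race during the burst of pod creation: a cgroup watch event fires for a crio-<id> container, but the container cannot be resolved by the time the event is handled, so the lookup reports a 404 and the event is dropped while the watch keeps running. A toy sketch of that tolerate-and-continue handling (invented names, not cAdvisor's code):

    // Toy sketch of dropping unresolvable watch events.
    package main

    import "fmt"

    func findContainer(id string) error {
        // The cgroup fired a watch event, but the container itself is already
        // gone (or not yet visible), as in the W-level lines above.
        return fmt.Errorf("Status 404 returned error can't find the container with id %s", id)
    }

    func handleWatchEvent(id string) {
        if err := findContainer(id); err != nil {
            // Log and drop the event; the watch loop itself keeps running.
            fmt.Printf("Failed to process watch event: %v\n", err)
        }
    }

    func main() {
        handleWatchEvent("cca7eb1166ef4a6db8b852a25953c2a5e506625b1afc7fa63cd0b3b7802acce8")
    }
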
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.752040 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.752465 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.252443891 +0000 UTC m=+143.773308645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.854185 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.854858 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.354824874 +0000 UTC m=+143.875689618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.893591 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" podStartSLOduration=122.893555516 podStartE2EDuration="2m2.893555516s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:56.891980902 +0000 UTC m=+143.412845646" watchObservedRunningTime="2026-02-18 14:33:56.893555516 +0000 UTC m=+143.414420260" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.928196 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-d4mbw" podStartSLOduration=122.928169162 podStartE2EDuration="2m2.928169162s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:56.922763893 +0000 UTC m=+143.443628647" watchObservedRunningTime="2026-02-18 14:33:56.928169162 +0000 UTC m=+143.449033896" Feb 18 14:33:56 crc kubenswrapper[4957]: I0218 14:33:56.961975 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:56 crc kubenswrapper[4957]: E0218 14:33:56.962409 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.462392689 +0000 UTC m=+143.983257433 (durationBeforeRetry 500ms). 
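
The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration=122.893555516 seconds is exactly observedRunningTime (14:33:56.893555516) minus podCreationTimestamp (14:31:54), and the zeroed firstStartedPulling/lastFinishedPulling timestamps indicate that no image pull contributed to the roughly two-minute figure (the images were already present on the node). Recomputing the duration from the logged timestamps:

    // Recomputing podStartSLOduration from the timestamps in the entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-02-18T14:31:54Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-02-18T14:33:56.893555516Z")
        fmt.Println(observed.Sub(created)) // 2m2.893555516s, matching podStartSLOduration
    }
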
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.039054 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hvb66"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.073975 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.075901 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.076489 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.576475196 +0000 UTC m=+144.097339940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.082594 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.086038 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z"] Feb 18 14:33:57 crc kubenswrapper[4957]: W0218 14:33:57.149588 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7ba0a_4ddc_4bca_acd8_e598854eceec.slice/crio-2589baf18de33489f2254deddca36101ec0dd9c817ee0858fb1885543755b7fd WatchSource:0}: Error finding container 2589baf18de33489f2254deddca36101ec0dd9c817ee0858fb1885543755b7fd: Status 404 returned error can't find the container with id 2589baf18de33489f2254deddca36101ec0dd9c817ee0858fb1885543755b7fd Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.173650 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fxh8s"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.181316 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.181893 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.681875152 +0000 UTC m=+144.202739896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.227476 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.238007 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" event={"ID":"80bc6540-5a90-42c9-a3b6-ae9a897119dd","Type":"ContainerStarted","Data":"ad22f925ca1cf73488cdd3168671e91c547710032c0a4d297ad4ffb29604ee17"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.246628 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" event={"ID":"262c1468-53c3-406c-b186-2912e491ba70","Type":"ContainerStarted","Data":"643d01a7031c5597c8a4a008bdb124eba00efc878d2ba931eb69d24757cfad66"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.251655 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" event={"ID":"8559e798-7b17-41b7-86c8-bf530c4092ea","Type":"ContainerStarted","Data":"91aa0311d2007c4253c126a80a8739f3dd6516207b01021d9df8cc6754dbd8c4"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.252940 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nx4lr" event={"ID":"1ea89768-219d-4769-9010-a34764aaee1d","Type":"ContainerStarted","Data":"7b203e5c4cae20a72784c4f4901a46008dac88c89a6147d8a4a3ed2107f3f2d7"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.253738 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fpnw9"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.257353 4957 generic.go:334] "Generic (PLEG): container finished" podID="1889400f-2fff-4c67-b401-966e820d5a26" containerID="ce16067cbf0dc018ba6a486d0d6b2551fd750c1bd28a605640e2ba1e9a22d980" exitCode=0 Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.257384 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" event={"ID":"1889400f-2fff-4c67-b401-966e820d5a26","Type":"ContainerDied","Data":"ce16067cbf0dc018ba6a486d0d6b2551fd750c1bd28a605640e2ba1e9a22d980"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.261581 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jg752" 
event={"ID":"099076c9-9f78-47b8-87f1-3c9cc47e0b09","Type":"ContainerStarted","Data":"c8ed8cb910878cb5bab4b11705486636080e4cd10ec369ab57d89ad0832f752e"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.263742 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" event={"ID":"b8d0adaf-a3c6-4121-970c-1f6205db177e","Type":"ContainerStarted","Data":"c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.264049 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.268089 4957 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-594sd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.268134 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" podUID="b8d0adaf-a3c6-4121-970c-1f6205db177e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.272265 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hvb66" event={"ID":"3da7ba0a-4ddc-4bca-acd8-e598854eceec","Type":"ContainerStarted","Data":"2589baf18de33489f2254deddca36101ec0dd9c817ee0858fb1885543755b7fd"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.275038 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" podStartSLOduration=123.275020309 podStartE2EDuration="2m3.275020309s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:57.272868459 +0000 UTC m=+143.793733203" watchObservedRunningTime="2026-02-18 14:33:57.275020309 +0000 UTC m=+143.795885083" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.280211 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" event={"ID":"1365afbc-1c4d-47e9-856e-520d67653d37","Type":"ContainerStarted","Data":"e3f30dfa850daee5b64dfc362b26fb6f338f5c9a3b42cf0e41640fba9cb5c189"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.280703 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nddjw"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.283929 4957 generic.go:334] "Generic (PLEG): container finished" podID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerID="ea14c09170356620cd9a9470108ad5b5bc295db0597803ba5acc6a97c3f30976" exitCode=0 Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.284603 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" event={"ID":"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37","Type":"ContainerDied","Data":"ea14c09170356620cd9a9470108ad5b5bc295db0597803ba5acc6a97c3f30976"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.285210 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.285616 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.785599182 +0000 UTC m=+144.306463926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.287279 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" event={"ID":"0eeb321c-f53d-4fa6-b824-006c3844edad","Type":"ContainerStarted","Data":"f542dca38b23527c0f24a11be83fd015e0fd46fded8b7e6cc0ad2465cfe350ee"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.289785 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" event={"ID":"1b3f7089-9ab3-4753-b0a2-7454ed4425ac","Type":"ContainerStarted","Data":"3ccd5c46d1ca03946066354a64186a79eb33d32bcc7b760ee06538fa6c03b58a"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.291246 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" event={"ID":"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6","Type":"ContainerStarted","Data":"593a89f6f82df4666a026691a7f942e7969b5cb5929bdc054a2ad5946888c8e5"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.293702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" event={"ID":"30c49091-ad03-4fcf-a0d4-3955a1ddaf97","Type":"ContainerStarted","Data":"88476e8d484a78ddad964fc17d1014d162646686ad6dfc52c4fe9f4efd646e8a"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.295133 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" event={"ID":"8523189e-83c3-4ec4-aee3-7fc8859d380a","Type":"ContainerStarted","Data":"cca7eb1166ef4a6db8b852a25953c2a5e506625b1afc7fa63cd0b3b7802acce8"} Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.296309 4957 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lp5cj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.296361 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.296458 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.296455 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.300663 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.360520 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jffnj" podStartSLOduration=123.360500484 podStartE2EDuration="2m3.360500484s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:57.359959829 +0000 UTC m=+143.880824583" watchObservedRunningTime="2026-02-18 14:33:57.360500484 +0000 UTC m=+143.881365228" Feb 18 14:33:57 crc kubenswrapper[4957]: W0218 14:33:57.384930 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57b2a2c_ac8c_4c84_84c4_c24479600b71.slice/crio-46aefde3da7fb1e5c15774766cdfb01de92adefa66b9f5bf9f1243f814f1c82d WatchSource:0}: Error finding container 46aefde3da7fb1e5c15774766cdfb01de92adefa66b9f5bf9f1243f814f1c82d: Status 404 returned error can't find the container with id 46aefde3da7fb1e5c15774766cdfb01de92adefa66b9f5bf9f1243f814f1c82d Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.386411 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.386593 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.886570065 +0000 UTC m=+144.407434809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.387062 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.389151 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.889133186 +0000 UTC m=+144.409997920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.451501 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.489629 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.489907 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podStartSLOduration=123.489878304 podStartE2EDuration="2m3.489878304s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:57.486238693 +0000 UTC m=+144.007103437" watchObservedRunningTime="2026-02-18 14:33:57.489878304 +0000 UTC m=+144.010743048" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.489966 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:57.989951126 +0000 UTC m=+144.510815870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.509904 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6f6dt"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.537072 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.605768 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.606558 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.606806 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.106779548 +0000 UTC m=+144.627644292 (durationBeforeRetry 500ms). 
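Every failed volume operation above ends with "No retries permitted until <t> (durationBeforeRetry 500ms)": the kubelet's nested pending-operations table records the failure time plus a wait, and the reconciler skips the volume until that deadline passes, which is why the identical error reappears on roughly a 500 ms cadence. The general pattern is exponential backoff from an initial delay up to a cap; in this log only the initial 500 ms delay is ever shown. The constants and types below are assumptions for illustration, not the kubelet's exact code:

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch of the retry bookkeeping behind "No retries permitted until":
    // a failed operation records its failure time plus a wait, and the wait
    // doubles up to a cap on repeated failures.
    const (
        initialWait = 500 * time.Millisecond
        maxWait     = 2 * time.Minute
    )

    type backoff struct {
        lastError time.Time
        wait      time.Duration
    }

    func (b *backoff) fail(now time.Time) {
        if b.wait == 0 {
            b.wait = initialWait
        } else {
            b.wait *= 2
            if b.wait > maxWait {
                b.wait = maxWait
            }
        }
        b.lastError = now
    }

    func (b *backoff) retryAllowedAt() time.Time { return b.lastError.Add(b.wait) }

    func main() {
        var b backoff
        b.fail(time.Date(2026, 2, 18, 14, 33, 57, 606806000, time.UTC))
        fmt.Println("no retries permitted until", b.retryAllowedAt()) // +500ms
    }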
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.628078 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:33:57 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:33:57 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:33:57 crc kubenswrapper[4957]: healthz check failed Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.628186 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.662303 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vd6hx"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.664963 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.679472 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wbzr4"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.686230 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gdjrj"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.696247 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s74tf"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.714397 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.714687 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.214621462 +0000 UTC m=+144.735486206 (durationBeforeRetry 500ms). 
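Unlike the connection-refused probes earlier, the router's startup probe reaches a live endpoint and gets an HTTP 500 whose body itemizes sub-checks: [-]backend-http and [-]has-synced are still failing (the router has not yet synced its routes), while [+]process-running passes. This is the usual aggregated-healthz format; a sketch of an endpoint producing that output is below, with the check names copied from the log and the wiring purely illustrative:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    type check struct {
        name string
        fn   func() error
    }

    // healthz reports [+]/[-] per named check and returns 500 until
    // every check passes, matching the probe output format above.
    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        http.Handle("/healthz", healthz([]check{
            {"backend-http", func() error { return fmt.Errorf("no backends") }},
            {"has-synced", func() error { return fmt.Errorf("not synced") }},
            {"process-running", func() error { return nil }},
        }))
        log.Fatal(http.ListenAndServe(":1936", nil))
    }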
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.714885 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.715241 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.215223978 +0000 UTC m=+144.736088722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.766076 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-s7l47" podStartSLOduration=123.766055875 podStartE2EDuration="2m3.766055875s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:57.764206344 +0000 UTC m=+144.285071108" watchObservedRunningTime="2026-02-18 14:33:57.766055875 +0000 UTC m=+144.286920619" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.815496 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.815765 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.315720899 +0000 UTC m=+144.836585643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.816463 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.817034 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.317016785 +0000 UTC m=+144.837881529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.897020 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" podStartSLOduration=123.896996718 podStartE2EDuration="2m3.896996718s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:57.875645317 +0000 UTC m=+144.396510071" watchObservedRunningTime="2026-02-18 14:33:57.896996718 +0000 UTC m=+144.417861462" Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.913846 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.915413 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.917586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:57 crc kubenswrapper[4957]: E0218 14:33:57.918765 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.418726149 +0000 UTC m=+144.939591073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.922167 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.927022 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xrkwr"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.936179 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfz7k"] Feb 18 14:33:57 crc kubenswrapper[4957]: I0218 14:33:57.978103 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v"] Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.019312 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.019987 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.51996763 +0000 UTC m=+145.040832374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.122011 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.122818 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.622793355 +0000 UTC m=+145.143658099 (durationBeforeRetry 500ms). 
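A formatting note that applies to every timestamp in these messages: the trailing "m=+145.143658099" is Go's monotonic clock reading, which time.Time values carry alongside wall-clock time and which String() appends automatically; it reads as seconds since the process started, so it doubles as kubelet uptime (about 145 s at this point). A tiny demonstration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        time.Sleep(500 * time.Millisecond)
        // Prints something like "... +0000 UTC m=+0.500123456"; the m=+
        // suffix is the monotonic reading relative to process start.
        fmt.Println(time.Now().String())
    }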
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:58 crc kubenswrapper[4957]: W0218 14:33:58.160608 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3609f97_bc24_49d9_994a_026f5b1f8c73.slice/crio-504462d37fac59b38e0a787528af2680abc04a3a8ef963d89b5a59d51d303a21 WatchSource:0}: Error finding container 504462d37fac59b38e0a787528af2680abc04a3a8ef963d89b5a59d51d303a21: Status 404 returned error can't find the container with id 504462d37fac59b38e0a787528af2680abc04a3a8ef963d89b5a59d51d303a21 Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.190116 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jg752" podStartSLOduration=124.190089387 podStartE2EDuration="2m4.190089387s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.188826272 +0000 UTC m=+144.709691026" watchObservedRunningTime="2026-02-18 14:33:58.190089387 +0000 UTC m=+144.710954131" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.224312 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.224972 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.724958022 +0000 UTC m=+145.245822766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.325519 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.326620 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
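The manager.go:1169 "Failed to process watch event ... Status 404" warnings scattered through this log (including the one just above for container 504462d37fac...) come from the embedded cAdvisor: a cgroup watch event arrives for a container the runtime cannot yet, or can no longer, describe, which is a benign race during rapid pod churn. The container ID is embedded in the cgroup path itself; a sketch of recovering it (the helper is illustrative, not cAdvisor code):

    package main

    import (
        "fmt"
        "strings"
    )

    // containerIDFromCgroup pulls the CRI-O container ID out of a
    // kubepods cgroup path like the ones in the warnings above.
    func containerIDFromCgroup(path string) string {
        base := path[strings.LastIndex(path, "/")+1:]
        return strings.TrimPrefix(base, "crio-")
    }

    func main() {
        p := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3609f97_bc24_49d9_994a_026f5b1f8c73.slice/crio-504462d37fac59b38e0a787528af2680abc04a3a8ef963d89b5a59d51d303a21"
        fmt.Println(containerIDFromCgroup(p)) // 504462d37fac...
    }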
No retries permitted until 2026-02-18 14:33:58.826597514 +0000 UTC m=+145.347462258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.417337 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pvcz" podStartSLOduration=124.417319734 podStartE2EDuration="2m4.417319734s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.416725187 +0000 UTC m=+144.937589931" watchObservedRunningTime="2026-02-18 14:33:58.417319734 +0000 UTC m=+144.938184488" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.442028 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.442409 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:58.942394778 +0000 UTC m=+145.463259522 (durationBeforeRetry 500ms). 
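The pod_startup_latency_tracker lines are self-consistent and easy to verify: podStartSLOduration is the span from podCreationTimestamp to watchObservedRunningTime, with image-pull time subtracted from the SLO figure; both pull timestamps are zero here, so the SLO and E2E durations coincide. Re-deriving the figure for the kube-storage-version-migrator-operator line above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the log line above; parse errors ignored
        // for brevity since the inputs are fixed.
        created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
            "2026-02-18 14:31:54 +0000 UTC")
        observed, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
            "2026-02-18 14:33:58.417319734 +0000 UTC")
        // Prints 124.417319734s, matching podStartSLOduration exactly.
        fmt.Printf("%.9fs\n", observed.Sub(created).Seconds())
    }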
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.444018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" event={"ID":"f2c035f3-f03e-4263-a7a5-e821b1fc5488","Type":"ContainerStarted","Data":"d2a23834817dce1df126a6a38580dc729ca9cf50893eed32c288fa17dc86489c"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.458154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" event={"ID":"8523189e-83c3-4ec4-aee3-7fc8859d380a","Type":"ContainerStarted","Data":"8c21fdde5df944da6e8ab7e75e78d689dbdd88a2fd268028e5e754da4b7ceaa4"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.472757 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4vvs" podStartSLOduration=124.472734367 podStartE2EDuration="2m4.472734367s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.472569133 +0000 UTC m=+144.993433877" watchObservedRunningTime="2026-02-18 14:33:58.472734367 +0000 UTC m=+144.993599111" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.474600 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nx4lr" event={"ID":"1ea89768-219d-4769-9010-a34764aaee1d","Type":"ContainerStarted","Data":"7d52830bba717786941bb2c2f02b77a2421b7f160994b07d72053a43b008fb16"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.487253 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" event={"ID":"9a161f1b-77bb-4a9d-9bfc-345bb46d439b","Type":"ContainerStarted","Data":"2321641266fb9e33be554e346e6eade777c02515b6dfe3f7675cf8dc4205231c"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.525154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" event={"ID":"8559e798-7b17-41b7-86c8-bf530c4092ea","Type":"ContainerStarted","Data":"19ac880096f8c8250e6e36cf37975250333c129ab47fa17412593fbfa698e3ab"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.526827 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f8bjt" podStartSLOduration=124.526803013 podStartE2EDuration="2m4.526803013s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.525519017 +0000 UTC m=+145.046383771" watchObservedRunningTime="2026-02-18 14:33:58.526803013 +0000 UTC m=+145.047667757" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.544371 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.546044 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.046023805 +0000 UTC m=+145.566888559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.555855 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" event={"ID":"35a815ee-3601-412e-a76e-b4e15292e02c","Type":"ContainerStarted","Data":"c1c3d5cea38ab86d59928b09c21a05157de0dedf6a4471a9fd9b733a6a028707"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.565520 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q9xrs" podStartSLOduration=124.565499684 podStartE2EDuration="2m4.565499684s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.564918818 +0000 UTC m=+145.085783572" watchObservedRunningTime="2026-02-18 14:33:58.565499684 +0000 UTC m=+145.086364428" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.584387 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerStarted","Data":"3d3a1ae29fe36ca19aeed36e4506b26f798c3d34fcc449df5b7dcb5cac3ace47"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.584644 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerStarted","Data":"97a5d60a44454811b728ebdeb7c274ac99986c1a1c1750708f1ef616e76895b1"} Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.585143 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.603832 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nx4lr" podStartSLOduration=6.603814944 podStartE2EDuration="6.603814944s" podCreationTimestamp="2026-02-18 14:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.601920871 +0000 UTC m=+145.122785615" watchObservedRunningTime="2026-02-18 14:33:58.603814944 +0000 UTC m=+145.124679688" Feb 18 14:33:58 crc 
kubenswrapper[4957]: I0218 14:33:58.603892 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.603949 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.622067 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:33:58 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:33:58 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:33:58 crc kubenswrapper[4957]: healthz check failed Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.622134 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.641531 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fxh8s" podStartSLOduration=124.641511267 podStartE2EDuration="2m4.641511267s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.633785983 +0000 UTC m=+145.154650727" watchObservedRunningTime="2026-02-18 14:33:58.641511267 +0000 UTC m=+145.162376011" Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.646122 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.648508 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.14848766 +0000 UTC m=+145.669352404 (durationBeforeRetry 500ms). 
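Two kinds of sync-loop traffic dominate this window: "SyncLoop UPDATE source=api" records, which are pod spec changes pushed from the API server, and "SyncLoop (PLEG)" records, which come from the kubelet's pod lifecycle event generator relisting container runtime state and turning changes into typed events. "Generic (PLEG): container finished ... exitCode=0" is the detection side of the same mechanism for a container that exited cleanly, typically an init container during startup. The event payload logged as {"ID":...,"Type":...,"Data":...} maps onto a small struct along these lines (field comments are inferred from the log, not quoted from kubelet source):

    package main

    import "fmt"

    type PodLifecycleEventType string

    const (
        ContainerStarted PodLifecycleEventType = "ContainerStarted"
        ContainerDied    PodLifecycleEventType = "ContainerDied"
    )

    // PodLifecycleEvent mirrors the shape of the event payloads in the
    // "SyncLoop (PLEG): event for pod" messages above.
    type PodLifecycleEvent struct {
        ID   string                // pod UID
        Type PodLifecycleEventType // what changed
        Data interface{}           // container ID for the two types above
    }

    func main() {
        e := PodLifecycleEvent{
            ID:   "099076c9-9f78-47b8-87f1-3c9cc47e0b09",
            Type: ContainerStarted,
            Data: "c8ed8cb910878cb5bab4b11705486636080e4cd10ec369ab57d89ad0832f752e",
        }
        fmt.Printf("event=%+v\n", e)
    }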
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.710064 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" event={"ID":"1b3f7089-9ab3-4753-b0a2-7454ed4425ac","Type":"ContainerStarted","Data":"363e4297fb7adbadfbd91f6fd1b75c1176987fb0d8e12314d06bf9d7edc70d07"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.747761 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.749626 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.249605928 +0000 UTC m=+145.770470682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.769726 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" event={"ID":"4cbe5d01-ad51-4b03-aba9-8757f5643bdb","Type":"ContainerStarted","Data":"1abb467f37fea709ec57d2822cd084487a3dd11d7e3fda9fb5b451959d464df2"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.806662 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xrkwr" event={"ID":"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae","Type":"ContainerStarted","Data":"a7ba2081d58641fd7132e3e7d81a1cacb6c83f26a832bdd4528da3c17784c4df"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.857662 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.858019 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.358006707 +0000 UTC m=+145.878871451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.858715 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" event={"ID":"4e7531cd-b62c-452a-b6a7-716513aaad71","Type":"ContainerStarted","Data":"16e4fa54b9e34f50d3bcb2f8dcc8d2119c98881c47b4de0a6f8acaf11edb3146"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.871917 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" event={"ID":"0eeb321c-f53d-4fa6-b824-006c3844edad","Type":"ContainerStarted","Data":"d802c3903ce10c8d41c261a111732cdcd24258e81d7b6ac2eded1315dbf4b2fd"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.873903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" event={"ID":"4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39","Type":"ContainerStarted","Data":"14943d5595113e63ebec3832258722186492c0f75edac2f3ae30e8c7c089ed43"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.904743 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" event={"ID":"b57b2a2c-ac8c-4c84-84c4-c24479600b71","Type":"ContainerStarted","Data":"46aefde3da7fb1e5c15774766cdfb01de92adefa66b9f5bf9f1243f814f1c82d"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.935937 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" event={"ID":"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978","Type":"ContainerStarted","Data":"d9091fa767a32adedf02e2ad00c1c1ddefd934f216434a179a590c71b73759f6"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.936001 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" event={"ID":"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978","Type":"ContainerStarted","Data":"f6e4e6809496d21666f26a146a1379bdb137a7eb4972212c3b491b471b949c35"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.951167 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" event={"ID":"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37","Type":"ContainerStarted","Data":"613e0204559e97040ebfe66db7d75d7effa45b74745a4dccc086b2e01d24ad10"}
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.958835 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.962462 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf"
Feb 18 14:33:58 crc kubenswrapper[4957]: E0218 14:33:58.962639 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.462611241 +0000 UTC m=+145.983475985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.993961 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-crshz" podStartSLOduration=124.993943138 podStartE2EDuration="2m4.993943138s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.911704202 +0000 UTC m=+145.432568956" watchObservedRunningTime="2026-02-18 14:33:58.993943138 +0000 UTC m=+145.514807882"
Feb 18 14:33:58 crc kubenswrapper[4957]: I0218 14:33:58.994804 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podStartSLOduration=124.994798151 podStartE2EDuration="2m4.994798151s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:58.992897729 +0000 UTC m=+145.513762473" watchObservedRunningTime="2026-02-18 14:33:58.994798151 +0000 UTC m=+145.515662895"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.026386 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" event={"ID":"35a9f539-42ce-4dc0-a9d1-d23278463bfc","Type":"ContainerStarted","Data":"149ad5c589f7882f9f40cc936dd14a4c5fad2b50a01dc0b4c6f86d9fd7705d48"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.027085 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" event={"ID":"35a9f539-42ce-4dc0-a9d1-d23278463bfc","Type":"ContainerStarted","Data":"c89891436ea040e0b3fc6dfd2df2f60ac37cf524f1122119dd504b49096624cd"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.060299 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.060842 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.560830638 +0000 UTC m=+146.081695382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
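The two interleaved failures above repeat as a fixed backoff loop: every MountDevice attempt for the incoming image-registry pod and every TearDown attempt for the outgoing pod (UID 8f668bae-612b-4b75-9490-919e737c6a3b) fails because the kubevirt.io.hostpath-provisioner driver has not yet registered with the kubelet, and nestedpendingoperations schedules the next retry 500ms out. A minimal sketch for pulling those backoff deadlines out of a log shaped like this one (Python; the kubelet.log path is a placeholder, not a path taken from this archive):

import re
from datetime import datetime

# Matches the nestedpendingoperations backoff line emitted after each failed
# MountDevice/TearDown attempt, capturing the volume name and retry deadline.
PATTERN = re.compile(
    r'nestedpendingoperations\.go:\d+\] Operation for '
    r'"\{volumeName:(?P<volume>[^ ]+) podName:(?P<pod>[^ ]*) nodeName:\}" failed\. '
    r'No retries permitted until (?P<deadline>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)'
)

def retry_deadlines(path="kubelet.log"):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                # The log prints 9 fractional digits; %f accepts at most 6.
                op = "TearDown" if "UnmountVolume.TearDown" in line else "MountDevice"
                yield op, datetime.strptime(m.group("deadline")[:26],
                                            "%Y-%m-%d %H:%M:%S.%f")

if __name__ == "__main__":
    per_op = {}
    for op, deadline in retry_deadlines():
        per_op.setdefault(op, []).append(deadline)
    for op, times in per_op.items():
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        print(f"{op}: {len(times)} retries, deadline gaps {gaps}")

On this excerpt it just counts the retry cycles per operation and shows how tightly the reconciler loops while the driver is missing; the 500ms figure is the attempt-to-deadline backoff the kubelet prints, not the spacing between successive deadlines.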
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.073108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" event={"ID":"3925001b-348a-4dde-a066-e49891c345bb","Type":"ContainerStarted","Data":"fc4eef7be542c58a76a750786c4fba75ded19c8ae7c6959480f3a00ed51a32bc"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.073895 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.084599 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.084680 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.102339 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hvb66" event={"ID":"3da7ba0a-4ddc-4bca-acd8-e598854eceec","Type":"ContainerStarted","Data":"ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.110440 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fjrfx" podStartSLOduration=125.11039289 podStartE2EDuration="2m5.11039289s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.068390528 +0000 UTC m=+145.589255302" watchObservedRunningTime="2026-02-18 14:33:59.11039289 +0000 UTC m=+145.631257634"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.113154 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podStartSLOduration=125.113145846 podStartE2EDuration="2m5.113145846s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.110955905 +0000 UTC m=+145.631820649" watchObservedRunningTime="2026-02-18 14:33:59.113145846 +0000 UTC m=+145.634010590"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.117886 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" event={"ID":"914e8f14-972c-4ca7-bcc6-4fc802cdfdc6","Type":"ContainerStarted","Data":"48d08e209f69b0a7282a3c32460881aef2c3d554febde549368414b979fda8b2"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.150222 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" event={"ID":"06d8fe1f-0a71-480f-899e-a070c091cb79","Type":"ContainerStarted","Data":"d557f13f07fc41eb40f307ff8dd39eadfa840dc297d9d8631a8936835b7c521f"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.150883 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hvb66" podStartSLOduration=125.150858159 podStartE2EDuration="2m5.150858159s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.149372538 +0000 UTC m=+145.670237282" watchObservedRunningTime="2026-02-18 14:33:59.150858159 +0000 UTC m=+145.671722893"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.169580 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.171250 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.671220253 +0000 UTC m=+146.192084997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.206543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" event={"ID":"1365afbc-1c4d-47e9-856e-520d67653d37","Type":"ContainerStarted","Data":"761b230f9e3779def0d667bbc62d519b2e506f6ad05093d5d0389cc6106d67b1"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.207848 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mn6hj" podStartSLOduration=125.207828595 podStartE2EDuration="2m5.207828595s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.207405224 +0000 UTC m=+145.728269988" watchObservedRunningTime="2026-02-18 14:33:59.207828595 +0000 UTC m=+145.728693339"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.223034 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" event={"ID":"80bc6540-5a90-42c9-a3b6-ae9a897119dd","Type":"ContainerStarted","Data":"b8dc3e1a604383ace16f6579547c6649349f6a18c3f75a1b569f57e194e37df3"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.225140 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"40f635d7e63013ef46e95844bd9c0eeb80a8a5d1440290361e47458c56b1a68a"}
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.247374 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" podStartSLOduration=125.247355349 podStartE2EDuration="2m5.247355349s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.246258469 +0000 UTC m=+145.767123203" watchObservedRunningTime="2026-02-18 14:33:59.247355349 +0000 UTC m=+145.768220093"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.271758 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.273254 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" event={"ID":"16bd595c-a77a-408f-9488-3499d8d57bdb","Type":"ContainerStarted","Data":"cc804aa151894738923ccc9ef853c3c06875eb93aa806417ec00c0d53ad1227b"}
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.274318 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.774301105 +0000 UTC m=+146.295165849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.774301105 +0000 UTC m=+146.295165849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.305488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" event={"ID":"455504d8-7edb-4008-9343-536491e9504a","Type":"ContainerStarted","Data":"4146d9ff9bcf34cf34c92e9d7a2bc5f3b02cd94049e760833b6324760431ffc1"} Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.306329 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.335574 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.335844 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.340948 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" event={"ID":"a3609f97-bc24-49d9-994a-026f5b1f8c73","Type":"ContainerStarted","Data":"504462d37fac59b38e0a787528af2680abc04a3a8ef963d89b5a59d51d303a21"} Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.350400 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" event={"ID":"01fa9f33-5f97-41b3-ae1f-f6421dd58827","Type":"ContainerStarted","Data":"ad8778a982742ac5a171d04756ff3f8ba695067e38f2391e7b7de4acbf7d988d"} Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.372759 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.373450 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.873407307 +0000 UTC m=+146.394272051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.373840 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.377001 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.876986486 +0000 UTC m=+146.397851230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.373258 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" event={"ID":"fa2af10f-a542-4777-9f7e-a2ca54798d99","Type":"ContainerStarted","Data":"a26410ec7f087f2cb9810d9a3708a41ad2aefab5e886ecd1a2aab9fd14318f5c"} Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.383310 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wbzr4" event={"ID":"e681c560-a7ad-4a54-831c-122ccd49c1c9","Type":"ContainerStarted","Data":"3c247127ed21ed01222416608be28e5d5b79dc3149ef61bbdefd9465c62bf119"} Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.387362 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" podStartSLOduration=125.387343132 podStartE2EDuration="2m5.387343132s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.335545869 +0000 UTC m=+145.856410613" watchObservedRunningTime="2026-02-18 14:33:59.387343132 +0000 UTC m=+145.908207876" Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.403462 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podStartSLOduration=125.403443548 podStartE2EDuration="2m5.403443548s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:33:59.385783229 +0000 UTC m=+145.906647973" watchObservedRunningTime="2026-02-18 14:33:59.403443548 +0000 UTC m=+145.924308292" Feb 18 14:33:59 
crc kubenswrapper[4957]: I0218 14:33:59.410340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" event={"ID":"7e32179f-a59d-44e1-9a56-ca25b8c5ff21","Type":"ContainerStarted","Data":"986b3780ba1be6b3a6327236d07dbc98032d056c3f6f8f6a4532dac1de5afb8b"} Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.475050 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.477144 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:33:59.977117936 +0000 UTC m=+146.497982880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.583354 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.587037 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.087021277 +0000 UTC m=+146.607886021 (durationBeforeRetry 500ms). 
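A second recurring pattern in this window: operators report ContainerStarted and, within the same second, a failed readiness probe with connect: connection refused, because the kubelet probes the pod IP before the server inside the container has started listening. The probe is just an HTTP(S) GET against the /healthz endpoint quoted in the message. A rough stand-in (Python; not the kubelet's actual prober from pkg/probe/http, and the URL below is the olm-operator endpoint quoted above, which only resolves from inside that node):

import ssl
import urllib.request
import urllib.error

def probe(url, timeout=1.0):
    # kubelet HTTPS probes skip certificate verification, so do the same here.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
            return resp.status, resp.read(1024).decode(errors="replace")
    except (urllib.error.URLError, OSError) as exc:
        return None, f"probe failure: {exc}"  # e.g. connection refused

if __name__ == "__main__":
    status, body = probe("https://10.217.0.35:8443/healthz")
    print("ready" if status == 200 else f"not ready ({body})")

Until the container binds its port this returns the same connection-refused outcome the prober logs; once it answers 200 on /healthz the kubelet flips the probe to "ready", as seen later in this log.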
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.624460 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 14:33:59 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld
Feb 18 14:33:59 crc kubenswrapper[4957]: [+]process-running ok
Feb 18 14:33:59 crc kubenswrapper[4957]: healthz check failed
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.624495 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.685460 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.685788 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.185772519 +0000 UTC m=+146.706637263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.787893 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.788648 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.288635785 +0000 UTC m=+146.809500529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.894154 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.895040 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.395023529 +0000 UTC m=+146.915888273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:33:59 crc kubenswrapper[4957]: I0218 14:33:59.997063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:33:59 crc kubenswrapper[4957]: E0218 14:33:59.997462 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.497407971 +0000 UTC m=+147.018272715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.043467 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-594sd"
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.098014 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.597976484 +0000 UTC m=+147.118841228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.097729 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.098344 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.098805 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.598792046 +0000 UTC m=+147.119656780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.199399 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.199565 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.699540524 +0000 UTC m=+147.220405268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.199705 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.200014 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.700002467 +0000 UTC m=+147.220867211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.301249 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.301487 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.801451034 +0000 UTC m=+147.322315808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.301632 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.302031 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.802021709 +0000 UTC m=+147.322886453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.402715 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.402929 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.902912801 +0000 UTC m=+147.423777535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.403028 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.403335 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:00.903327122 +0000 UTC m=+147.424191876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.419702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" event={"ID":"4e7531cd-b62c-452a-b6a7-716513aaad71","Type":"ContainerStarted","Data":"7c87d28af87db248bd492f55399115033e3de3055ed336f8946d5e47f51cd9f9"}
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.419749 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" event={"ID":"4e7531cd-b62c-452a-b6a7-716513aaad71","Type":"ContainerStarted","Data":"e90748ead3f08e65a5b03513e364b84a11e8be8bba5afec7c898e64061b89116"}
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.421552 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" event={"ID":"4cbe5d01-ad51-4b03-aba9-8757f5643bdb","Type":"ContainerStarted","Data":"1af9bc629710bff03e0e53f21c128e9198f95e3d624450305c5de49ad7b1b75a"}
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.423155 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" event={"ID":"4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39","Type":"ContainerStarted","Data":"7ecf9b510261f66a1447f7168750e510df4df780be10f906caf7eff1ae09045e"}
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.423181 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" event={"ID":"4f7ee635-a9ad-4c0f-98b8-c72a33eb8b39","Type":"ContainerStarted","Data":"2681e0320979c46944b51c766baf99d3e7fd84b72d75f70526af00aa2a7f02a6"}
Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.425457 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" event={"ID":"1b3f7089-9ab3-4753-b0a2-7454ed4425ac","Type":"ContainerStarted","Data":"0cc403daf8d17277cb4930562337daddf4c2a72d07197e4d8f7f8bd8cf609c43"}
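The pod_startup_latency_tracker lines that bracket these events report podStartSLOduration as a bare float of seconds and podStartE2EDuration as a Go duration string ("2m6.442720032s" and the like); the pulling timestamps are the zero time, consistent with no image pull having been recorded for these pods. A small helper for decoding those Go-style duration strings when post-processing a log like this (Python; assumes only the h/m/s/ms units that actually occur here):

import re

UNIT_SECONDS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 0.001}
# "ms" must be tried before "m" so milliseconds are not split into m + s.
TOKEN = re.compile(r"(\d+(?:\.\d+)?)(h|ms|m|s)")

def go_duration_to_seconds(text):
    # "2m6.442720032s" -> 126.442720032
    return sum(float(value) * UNIT_SECONDS[unit]
               for value, unit in TOKEN.findall(text))

# Cross-check against a pair taken from this log: podStartE2EDuration
# "2m6.442720032s" should match podStartSLOduration=126.442720032.
assert abs(go_duration_to_seconds("2m6.442720032s") - 126.442720032) < 1e-9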
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" event={"ID":"1b3f7089-9ab3-4753-b0a2-7454ed4425ac","Type":"ContainerStarted","Data":"0cc403daf8d17277cb4930562337daddf4c2a72d07197e4d8f7f8bd8cf609c43"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.425620 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.426857 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"fd27e8571ceb4b4cffa7844add2265690bd0eecf09b5f3c39992bde1f2249d31"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.428215 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nddjw" event={"ID":"16bd595c-a77a-408f-9488-3499d8d57bdb","Type":"ContainerStarted","Data":"c727a2caaf60344dca8ad579a8f0f449ea71c5b48aaf9af639f5675e285827ba"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.429925 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5ftr" event={"ID":"1365afbc-1c4d-47e9-856e-520d67653d37","Type":"ContainerStarted","Data":"9c9125418fd588d2f69ba3c5c4fd4b7b67b4261e10ed4456aba08f920a72aea3"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.431265 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" event={"ID":"3925001b-348a-4dde-a066-e49891c345bb","Type":"ContainerStarted","Data":"433e9255f81fe43bf83563552d9bb8b7ca0e6e76cd0d686373496064bf612423"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.432759 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" event={"ID":"a3609f97-bc24-49d9-994a-026f5b1f8c73","Type":"ContainerStarted","Data":"e6c9575462d812075872fa5470aecc27d2c2147a1e3c4c8825b8163971138d63"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.435294 4957 generic.go:334] "Generic (PLEG): container finished" podID="b57b2a2c-ac8c-4c84-84c4-c24479600b71" containerID="0d681045c125f40aec73385bfb86450935c1bc4bfc22e032dc4b47fe38cf20cc" exitCode=0 Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.435403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" event={"ID":"b57b2a2c-ac8c-4c84-84c4-c24479600b71","Type":"ContainerDied","Data":"0d681045c125f40aec73385bfb86450935c1bc4bfc22e032dc4b47fe38cf20cc"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.438085 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" event={"ID":"1889400f-2fff-4c67-b401-966e820d5a26","Type":"ContainerStarted","Data":"58c35fd6122c875dd655d5d2ef1ab95f58b731883c2978667bf1edae3de69c13"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.439358 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" event={"ID":"7e32179f-a59d-44e1-9a56-ca25b8c5ff21","Type":"ContainerStarted","Data":"968ae94edfcba12fecbbd2e18efed4c32a46c1e1c7b27c9816cc37fc4a4e28be"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.441093 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" event={"ID":"01fa9f33-5f97-41b3-ae1f-f6421dd58827","Type":"ContainerStarted","Data":"67c37b6d62032a07f1c8b6cf49c3172ac28bdc213023a57401b1382210fb8c14"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.441149 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" event={"ID":"01fa9f33-5f97-41b3-ae1f-f6421dd58827","Type":"ContainerStarted","Data":"437a03236798ac6efa77644e956cfb7f1e99fb4a43020884a4e5e3047efd618f"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.442734 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6f6dt" podStartSLOduration=126.442720032 podStartE2EDuration="2m6.442720032s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.440895822 +0000 UTC m=+146.961760566" watchObservedRunningTime="2026-02-18 14:34:00.442720032 +0000 UTC m=+146.963584766" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.444589 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" event={"ID":"80bc6540-5a90-42c9-a3b6-ae9a897119dd","Type":"ContainerStarted","Data":"ce6fec730b91fe24fc79f6f997a89d8de30e0d604d60a2ecf81cd9535317645d"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.450931 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" event={"ID":"e1fce9e3-ffff-49f7-9f88-f5fd4cc98978","Type":"ContainerStarted","Data":"ba10dc904ae72811673b0b013d513ffe07f37c2f1a63f3a6c529980aa041c842"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.453918 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xrkwr" event={"ID":"c7a5e1cd-ca67-4a39-b3b1-16889e4ea7ae","Type":"ContainerStarted","Data":"8af6b4174329c8bdaf2a392cb7ed99acaec5d75dad239e0a337cfc238c9c61b3"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.456075 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" event={"ID":"35a815ee-3601-412e-a76e-b4e15292e02c","Type":"ContainerStarted","Data":"b795ca58f31f8a2c7755cd59996945b47f263832d3f95f3b804c8b36ce52dd00"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.457395 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" event={"ID":"9a161f1b-77bb-4a9d-9bfc-345bb46d439b","Type":"ContainerStarted","Data":"5955a16e3e753996736404ee9f2ef22ee71db1024fa108014ada860916093401"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.458285 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.459269 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" event={"ID":"fa2af10f-a542-4777-9f7e-a2ca54798d99","Type":"ContainerStarted","Data":"299680cc4ba88ff4e459bec8c2204c825fa29d04f5fb81d5abfc6f0e166794d1"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.459794 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 
container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.459855 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.464485 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" event={"ID":"06d8fe1f-0a71-480f-899e-a070c091cb79","Type":"ContainerStarted","Data":"77bcd1e092adb83f4c51065440a331bbd68ddf9dab496ab869c506288a8f5cce"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.465550 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.466858 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sfz7k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.466905 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.467823 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wbzr4" event={"ID":"e681c560-a7ad-4a54-831c-122ccd49c1c9","Type":"ContainerStarted","Data":"63636216d69b449165363e0e7013c9581163483966f519ff35026e540a235e39"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.467856 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wbzr4" event={"ID":"e681c560-a7ad-4a54-831c-122ccd49c1c9","Type":"ContainerStarted","Data":"e0380973ffb7d6e85e0dfe9cbe033e9d175da8b88b019214577cbeefb4eea3a1"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.468436 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wbzr4" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.472370 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" event={"ID":"455504d8-7edb-4008-9343-536491e9504a","Type":"ContainerStarted","Data":"eb98dfdcf50ca67106e54b27dffd20a2d5ae30d6bd371d384f667d49a3b4466a"} Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.496644 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.496740 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" 
podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.504716 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.506041 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.006019722 +0000 UTC m=+147.526884466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.535862 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" podStartSLOduration=126.535835337 podStartE2EDuration="2m6.535835337s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.496132869 +0000 UTC m=+147.016997623" watchObservedRunningTime="2026-02-18 14:34:00.535835337 +0000 UTC m=+147.056700081" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.537700 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.558047 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.616402 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.617067 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.117047504 +0000 UTC m=+147.637912248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.639960 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:00 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:00 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:00 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.640022 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.640081 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" podStartSLOduration=126.640059471 podStartE2EDuration="2m6.640059471s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.63711027 +0000 UTC m=+147.157975014" watchObservedRunningTime="2026-02-18 14:34:00.640059471 +0000 UTC m=+147.160924225" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.678256 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" podStartSLOduration=126.678232707 podStartE2EDuration="2m6.678232707s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.667878931 +0000 UTC m=+147.188743665" watchObservedRunningTime="2026-02-18 14:34:00.678232707 +0000 UTC m=+147.199097451" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.725627 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" podStartSLOduration=126.725608888 podStartE2EDuration="2m6.725608888s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.725148565 +0000 UTC m=+147.246013319" watchObservedRunningTime="2026-02-18 14:34:00.725608888 +0000 UTC m=+147.246473632" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.726175 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.726346 4957 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.226322818 +0000 UTC m=+147.747187562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.726600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.726941 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.226932585 +0000 UTC m=+147.747797509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.748905 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xsf6z" podStartSLOduration=126.748886822 podStartE2EDuration="2m6.748886822s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.747628177 +0000 UTC m=+147.268492931" watchObservedRunningTime="2026-02-18 14:34:00.748886822 +0000 UTC m=+147.269751556" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.827331 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.827519 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.327483997 +0000 UTC m=+147.848348741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.827709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.828164 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.328156305 +0000 UTC m=+147.849021049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.837400 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nld6v" podStartSLOduration=126.83737827 podStartE2EDuration="2m6.83737827s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.783712906 +0000 UTC m=+147.304577650" watchObservedRunningTime="2026-02-18 14:34:00.83737827 +0000 UTC m=+147.358243014" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.839132 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" podStartSLOduration=126.839121859 podStartE2EDuration="2m6.839121859s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.831744415 +0000 UTC m=+147.352609159" watchObservedRunningTime="2026-02-18 14:34:00.839121859 +0000 UTC m=+147.359986603" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.864813 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-txsxl" podStartSLOduration=126.864794589 podStartE2EDuration="2m6.864794589s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.859555404 +0000 UTC m=+147.380420148" watchObservedRunningTime="2026-02-18 14:34:00.864794589 +0000 UTC m=+147.385659333" Feb 18 14:34:00 crc 
kubenswrapper[4957]: I0218 14:34:00.898514 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwq5z" podStartSLOduration=126.898490981 podStartE2EDuration="2m6.898490981s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.896166977 +0000 UTC m=+147.417031741" watchObservedRunningTime="2026-02-18 14:34:00.898490981 +0000 UTC m=+147.419355735" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.918464 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 14:34:00 crc kubenswrapper[4957]: I0218 14:34:00.928623 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:00 crc kubenswrapper[4957]: E0218 14:34:00.928991 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.428976075 +0000 UTC m=+147.949840819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.030131 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.030835 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.530823203 +0000 UTC m=+148.051687947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.047998 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dk2nb" podStartSLOduration=127.047980337 podStartE2EDuration="2m7.047980337s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:00.955852598 +0000 UTC m=+147.476717352" watchObservedRunningTime="2026-02-18 14:34:01.047980337 +0000 UTC m=+147.568845081" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.083001 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-s74tf" podStartSLOduration=127.082982606 podStartE2EDuration="2m7.082982606s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:01.080786835 +0000 UTC m=+147.601651579" watchObservedRunningTime="2026-02-18 14:34:01.082982606 +0000 UTC m=+147.603847350" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.124911 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wbzr4" podStartSLOduration=9.124883005 podStartE2EDuration="9.124883005s" podCreationTimestamp="2026-02-18 14:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:01.123855177 +0000 UTC m=+147.644719931" watchObservedRunningTime="2026-02-18 14:34:01.124883005 +0000 UTC m=+147.645747769" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.138014 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.138516 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.638470551 +0000 UTC m=+148.159335295 (durationBeforeRetry 500ms). 
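The pod_startup_latency_tracker entries interleaved with these errors record podStartSLOduration as watchObservedRunningTime minus podCreationTimestamp: for authentication-operator-69f744f599-vd6hx above, 14:34:00.535835337 minus 14:31:54 is 126.535835337 seconds, i.e. "2m6.535835337s". A short sketch (hypothetical file name; timestamps copied from that log entry) reproducing the arithmetic:

// latency.go - reproduces the podStartSLOduration arithmetic from the
// pod_startup_latency_tracker lines: observed running time minus the pod
// creation timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-02-18 14:31:54 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-02-18 14:34:00.535835337 +0000 UTC")
	// Prints 2m6.535835337s, matching podStartE2EDuration in the log entry.
	fmt.Println(observed.Sub(created))
}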
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.239597 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.240021 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.74000569 +0000 UTC m=+148.260870434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.343403 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.343930 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.843908045 +0000 UTC m=+148.364772789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.398019 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmxbx" podStartSLOduration=127.397989721 podStartE2EDuration="2m7.397989721s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:01.280942793 +0000 UTC m=+147.801807537" watchObservedRunningTime="2026-02-18 14:34:01.397989721 +0000 UTC m=+147.918854455" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.399190 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podStartSLOduration=127.399181314 podStartE2EDuration="2m7.399181314s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:01.37590319 +0000 UTC m=+147.896767934" watchObservedRunningTime="2026-02-18 14:34:01.399181314 +0000 UTC m=+147.920046058" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.445779 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.445845 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.445879 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.445918 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.445950 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.446324 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:01.946299828 +0000 UTC m=+148.467164572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.448715 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.468374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.478354 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.481849 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.545637 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xrkwr" podStartSLOduration=8.545608056 podStartE2EDuration="8.545608056s" podCreationTimestamp="2026-02-18 14:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:01.496999161 +0000 UTC m=+148.017863905" watchObservedRunningTime="2026-02-18 14:34:01.545608056 +0000 UTC m=+148.066472800" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.551107 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.562622 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.563666 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.566765 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.06672196 +0000 UTC m=+148.587586704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.610725 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" event={"ID":"b57b2a2c-ac8c-4c84-84c4-c24479600b71","Type":"ContainerStarted","Data":"c1f73a457efe9aa8d08be9a5e51aefb978a2ff7fce3902392199b679f3ffcf34"} Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.610790 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" event={"ID":"b57b2a2c-ac8c-4c84-84c4-c24479600b71","Type":"ContainerStarted","Data":"e3e04099f40cddde3d55d8a024b23cd18257d93f7ce66130e2e7a74f5d7437ef"} Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.636805 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.636903 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.637642 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sfz7k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.637739 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.645742 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:01 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:01 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:01 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.645809 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.667716 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.668154 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.168137196 +0000 UTC m=+148.689001940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.732086 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.772808 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.775954 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.275891197 +0000 UTC m=+148.796755941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.882529 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.882992 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.38297637 +0000 UTC m=+148.903841114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.963177 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" podStartSLOduration=127.963148548 podStartE2EDuration="2m7.963148548s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:01.957921763 +0000 UTC m=+148.478786507" watchObservedRunningTime="2026-02-18 14:34:01.963148548 +0000 UTC m=+148.484013292" Feb 18 14:34:01 crc kubenswrapper[4957]: I0218 14:34:01.983631 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:01 crc kubenswrapper[4957]: E0218 14:34:01.984198 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.48417705 +0000 UTC m=+149.005041794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.009354 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p65zp"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.010810 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.014347 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.032441 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p65zp"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.090396 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-catalog-content\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.090496 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-utilities\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.090563 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.090627 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dffxq\" (UniqueName: \"kubernetes.io/projected/af490140-34e3-4689-b13a-112b97f5cd9e-kube-api-access-dffxq\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.091042 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.591020736 +0000 UTC m=+149.111885480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.144473 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wh6pb"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.145876 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.152120 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.178259 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wh6pb"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.192463 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.192762 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-catalog-content\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.192817 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-utilities\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.192849 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dffxq\" (UniqueName: \"kubernetes.io/projected/af490140-34e3-4689-b13a-112b97f5cd9e-kube-api-access-dffxq\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.193939 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-catalog-content\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.194121 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 14:34:02.694071437 +0000 UTC m=+149.214936221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.194301 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-utilities\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.280304 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dffxq\" (UniqueName: \"kubernetes.io/projected/af490140-34e3-4689-b13a-112b97f5cd9e-kube-api-access-dffxq\") pod \"certified-operators-p65zp\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") " pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.295127 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-utilities\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.295199 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dpz\" (UniqueName: \"kubernetes.io/projected/74312833-84d3-4221-a8f7-07c892db5165-kube-api-access-67dpz\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.295296 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.295375 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-catalog-content\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.296143 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.79610609 +0000 UTC m=+149.316970834 (durationBeforeRetry 500ms). 
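The "m=+149.316970834"-style suffixes on the timestamps above are Go's monotonic-clock readings: the kubelet formats time.Time values obtained from time.Now(), and Go's String() appends the monotonic reading, here roughly seconds since process start. A two-line illustration:

// mono.go - shows where the "m=+..." suffix in the log timestamps comes from.
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now()
	fmt.Println(t)          // e.g. 2026-02-18 14:34:02.694071437 +0000 UTC m=+149.214936221
	fmt.Println(t.Round(0)) // Round(0) strips the monotonic reading
}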
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.344774 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lj4dn"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.346187 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.371109 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.396712 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.397135 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-catalog-content\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.397196 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-utilities\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.397218 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67dpz\" (UniqueName: \"kubernetes.io/projected/74312833-84d3-4221-a8f7-07c892db5165-kube-api-access-67dpz\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.397763 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:02.897740812 +0000 UTC m=+149.418605556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.398220 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-catalog-content\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.398505 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-utilities\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.400989 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj4dn"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.454320 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dpz\" (UniqueName: \"kubernetes.io/projected/74312833-84d3-4221-a8f7-07c892db5165-kube-api-access-67dpz\") pod \"community-operators-wh6pb\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") " pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.463828 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.501091 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.501837 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.001806312 +0000 UTC m=+149.522671056 (durationBeforeRetry 500ms). 
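Each failed volume operation in these entries is parked with "No retries permitted until ..." set exactly durationBeforeRetry=500ms in the future, and the reconciler re-attempts on its next pass, which is why the same error repeats at roughly half-second intervals. An illustrative sketch of that retry shape using apimachinery's wait package; this is not the kubelet's actual code path, and the growth factor and step count are assumptions, with only the 500ms initial delay coming from the log:

// retry.go - an illustrative retry loop with a 500ms initial delay.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches durationBeforeRetry 500ms
		Factor:   2,                      // assumed for the sketch
		Steps:    5,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempts++
		fmt.Println("attempt", attempts, "at", time.Now().Format(time.RFC3339Nano))
		// Stand-in for MountDevice: keeps failing until the driver registers.
		return false, nil
	})
	fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps are exhausted
}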
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.509537 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-catalog-content\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.509740 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-utilities\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.509837 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkgb\" (UniqueName: \"kubernetes.io/projected/f83e4add-33e3-4e43-af47-f9980471df63-kube-api-access-vlkgb\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.566984 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gnwwm"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.586517 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.586726 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.601475 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.601578 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.607231 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.607566 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.617153 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnwwm"] Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620046 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:02 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:02 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:02 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620113 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620464 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.620580 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.120541297 +0000 UTC m=+149.641406061 (durationBeforeRetry 500ms). 
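The readiness failures for downloads-7954f5f757-fxh8s and marketplace-operator-79b997595-sfz7k above are plain HTTP GETs refused at dial time: "connect: connection refused" means nothing is listening on the pod IP and port yet, as opposed to the router case where the server answers but returns 500. A rough stand-in for such a probe (endpoint copied from the marketplace-operator entry; the timeout and file name are assumed, and the kubelet's real prober differs in detail):

// probe.go - approximate an HTTP readiness probe against an endpoint from the log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://10.217.0.23:8080/healthz")
	if err != nil {
		// e.g. dial tcp 10.217.0.23:8080: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode) // 2xx counts as ready
}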
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620695 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-catalog-content\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620766 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-utilities\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.620789 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkgb\" (UniqueName: \"kubernetes.io/projected/f83e4add-33e3-4e43-af47-f9980471df63-kube-api-access-vlkgb\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.621364 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.121352569 +0000 UTC m=+149.642217303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.622179 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-catalog-content\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.622393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-utilities\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.624576 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.637265 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"6b3d0d12e3389050ac654520ee2ef7c8e3a05208518e27f9ee901907e52bf312"}
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.639492 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sfz7k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.639543 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.659710 4957 csr.go:261] certificate signing request csr-mnddp is approved, waiting to be issued
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.665220 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkgb\" (UniqueName: \"kubernetes.io/projected/f83e4add-33e3-4e43-af47-f9980471df63-kube-api-access-vlkgb\") pod \"certified-operators-lj4dn\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.678565 4957 csr.go:257] certificate signing request csr-mnddp is issued
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.719507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4dn"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.722523 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.722871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.722963 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.723070 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-utilities\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.723103 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdh9w\" (UniqueName: \"kubernetes.io/projected/0237301c-c7c4-490c-9be1-daaa5db30a10-kube-api-access-vdh9w\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.723233 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-catalog-content\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.723830 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.223809294 +0000 UTC m=+149.744674038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-catalog-content\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829455 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829486 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829531 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829561 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-utilities\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829587 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdh9w\" (UniqueName: \"kubernetes.io/projected/0237301c-c7c4-490c-9be1-daaa5db30a10-kube-api-access-vdh9w\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.829751 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.830160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-catalog-content\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.830622 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.330608289 +0000 UTC m=+149.851473033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.830680 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-utilities\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.885035 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdh9w\" (UniqueName: \"kubernetes.io/projected/0237301c-c7c4-490c-9be1-daaa5db30a10-kube-api-access-vdh9w\") pod \"community-operators-gnwwm\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.889232 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.921215 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnwwm"
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.930083 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:02 crc kubenswrapper[4957]: E0218 14:34:02.930482 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.430460452 +0000 UTC m=+149.951325206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:02 crc kubenswrapper[4957]: I0218 14:34:02.956880 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.037688 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.038263 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.538241914 +0000 UTC m=+150.059106658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.139369 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.139729 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.639682901 +0000 UTC m=+150.160547645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.140211 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.140638 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.640622757 +0000 UTC m=+150.161487501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.193943 4957 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.241021 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.241310 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.741293552 +0000 UTC m=+150.262158296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.268914 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p65zp"]
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.345266 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.345751 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.845730271 +0000 UTC m=+150.366595015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.422451 4957 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T14:34:03.193979012Z","Handler":null,"Name":""}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.432225 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj4dn"]
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.447666 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.448007 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:34:03.94798936 +0000 UTC m=+150.468854104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.532925 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wh6pb"]
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.552323 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:03 crc kubenswrapper[4957]: E0218 14:34:03.552723 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:34:04.052704878 +0000 UTC m=+150.573569622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hfcww" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.614293 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 14:34:03 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld
Feb 18 14:34:03 crc kubenswrapper[4957]: [+]process-running ok
Feb 18 14:34:03 crc kubenswrapper[4957]: healthz check failed
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.614802 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.616819 4957 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.616847 4957 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.655270 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.679893 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 14:29:02 +0000 UTC, rotation deadline is 2026-12-19 00:20:44.487639943 +0000 UTC
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.679946 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7281h46m40.807697495s for next certificate rotation
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.696883 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"78231a46aadaa0754a982747ac79c016da44fb82321048161b7d8aec25854607"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.698018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4dn" event={"ID":"f83e4add-33e3-4e43-af47-f9980471df63","Type":"ContainerStarted","Data":"12e0ce09ce8f568e47cdc0f6c056e82eab414886c5f646e4512507f275f960e0"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.698841 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p65zp" event={"ID":"af490140-34e3-4689-b13a-112b97f5cd9e","Type":"ContainerStarted","Data":"afbd2873a934416248bac03b934d4a40e637756f15f94211e3f998840349075a"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.699593 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"38c8b06e93431521789abe309ccc3ee286b4e30a012e5f1589663e7100333755"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.702084 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"bb5ac23d3f31fb7a7cb548f8072fb6d4ca48289c1e9561040c57b415f8b0dedf"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.712206 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2670c55b5277ffb29df4b64ecdc56fa0432b2abfae410fdbcad02693a8ad6c68"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.714493 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh6pb" event={"ID":"74312833-84d3-4221-a8f7-07c892db5165","Type":"ContainerStarted","Data":"0a11791d259e2a1bab4a1b5628d1eccd2b70c0695a378c0eeec65cf412db93dc"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.725347 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cbe5d01-ad51-4b03-aba9-8757f5643bdb" containerID="1af9bc629710bff03e0e53f21c128e9198f95e3d624450305c5de49ad7b1b75a" exitCode=0
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.725538 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" event={"ID":"4cbe5d01-ad51-4b03-aba9-8757f5643bdb","Type":"ContainerDied","Data":"1af9bc629710bff03e0e53f21c128e9198f95e3d624450305c5de49ad7b1b75a"}
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.828212 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.844783 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.870887 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.913072 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.913118 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:03 crc kubenswrapper[4957]: I0218 14:34:03.954738 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnwwm"]
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.000549 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hfcww\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.104245 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wg8d9"]
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.105306 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.108248 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.112933 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wg8d9"]
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.188573 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-catalog-content\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.188646 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5w5\" (UniqueName: \"kubernetes.io/projected/ffc559a7-20d1-416b-ae18-bcbfc5193d32-kube-api-access-9c5w5\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.188721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-utilities\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.221793 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.243003 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.292176 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-utilities\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.292636 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-catalog-content\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.292677 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c5w5\" (UniqueName: \"kubernetes.io/projected/ffc559a7-20d1-416b-ae18-bcbfc5193d32-kube-api-access-9c5w5\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.293933 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-catalog-content\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.295804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-utilities\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.345369 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c5w5\" (UniqueName: \"kubernetes.io/projected/ffc559a7-20d1-416b-ae18-bcbfc5193d32-kube-api-access-9c5w5\") pod \"redhat-marketplace-wg8d9\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") " pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.503539 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpjb"]
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.505394 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.513282 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpjb"]
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.558920 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hfcww"]
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.595322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-catalog-content\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.595387 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-utilities\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.595408 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrn5\" (UniqueName: \"kubernetes.io/projected/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-kube-api-access-gtrn5\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.612495 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 14:34:04 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld
Feb 18 14:34:04 crc kubenswrapper[4957]: [+]process-running ok
Feb 18 14:34:04 crc kubenswrapper[4957]: healthz check failed
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.612575 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.627825 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.696907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-catalog-content\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.696973 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-utilities\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.697010 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtrn5\" (UniqueName: \"kubernetes.io/projected/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-kube-api-access-gtrn5\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.697684 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-catalog-content\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.697714 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-utilities\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.715975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtrn5\" (UniqueName: \"kubernetes.io/projected/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-kube-api-access-gtrn5\") pod \"redhat-marketplace-9jpjb\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.733222 4957 generic.go:334] "Generic (PLEG): container finished" podID="74312833-84d3-4221-a8f7-07c892db5165" containerID="b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a" exitCode=0
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.733297 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh6pb" event={"ID":"74312833-84d3-4221-a8f7-07c892db5165","Type":"ContainerDied","Data":"b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.738233 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.748144 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"bc55b07a7f2ec6e82b508acf90927e8054d9719dc5154e90299a5c59e1b0c522"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.751997 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241","Type":"ContainerStarted","Data":"89c3a46a5f3a7ce29dbb8993acbe483b28833cff46b7591e714a7460dc789f7b"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.752038 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241","Type":"ContainerStarted","Data":"3e5897ef379b895a9056e29d8b3d03d7cf31b0c0ced3a53483dfd6048cd94a01"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.763380 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b4359dc88ceed0d6074ac0d3925d81edea7d60d276fb0c3c9a9ecc9ea7284df0"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.773512 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"581adc30ef4d799c36483643069e41d25a0cba667e0b72d7f883637b0d3bcb7a"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.778931 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" event={"ID":"1c7a025e-0270-445c-ac05-34ffe3502176","Type":"ContainerStarted","Data":"4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.779026 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" event={"ID":"1c7a025e-0270-445c-ac05-34ffe3502176","Type":"ContainerStarted","Data":"a73696d362f145a294c2ebfcd25250a3feb7c9ac05c6d0a1efd85f0e48b58fd1"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.780220 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.783021 4957 generic.go:334] "Generic (PLEG): container finished" podID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerID="58d022c89bfe4386971455e907858bb48b6bfe6f7fe43a6ac02e0013ee815c93" exitCode=0
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.783102 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnwwm" event={"ID":"0237301c-c7c4-490c-9be1-daaa5db30a10","Type":"ContainerDied","Data":"58d022c89bfe4386971455e907858bb48b6bfe6f7fe43a6ac02e0013ee815c93"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.783122 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnwwm" event={"ID":"0237301c-c7c4-490c-9be1-daaa5db30a10","Type":"ContainerStarted","Data":"542b05c769c4073407e0bfad9ed2b4b5a7e1cced52b347b5c3b20e4eef21a112"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.785598 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3f92d1cd69ad2dc01a3e99d84b10f360f7831733b8786b29cf9c78f4279aaa1a"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.786463 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.790108 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podStartSLOduration=12.790086082 podStartE2EDuration="12.790086082s" podCreationTimestamp="2026-02-18 14:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:04.788899439 +0000 UTC m=+151.309764203" watchObservedRunningTime="2026-02-18 14:34:04.790086082 +0000 UTC m=+151.310950826"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.791588 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.791580684 podStartE2EDuration="2.791580684s" podCreationTimestamp="2026-02-18 14:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:04.769347529 +0000 UTC m=+151.290212263" watchObservedRunningTime="2026-02-18 14:34:04.791580684 +0000 UTC m=+151.312445428"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.796614 4957 generic.go:334] "Generic (PLEG): container finished" podID="f83e4add-33e3-4e43-af47-f9980471df63" containerID="c0aea824c0178765a471c2297627caad783fae42feb994db5d1aab99099b268a" exitCode=0
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.797528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4dn" event={"ID":"f83e4add-33e3-4e43-af47-f9980471df63","Type":"ContainerDied","Data":"c0aea824c0178765a471c2297627caad783fae42feb994db5d1aab99099b268a"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.822168 4957 generic.go:334] "Generic (PLEG): container finished" podID="af490140-34e3-4689-b13a-112b97f5cd9e" containerID="a64b56e72369eec2d9b55244bbd3d5b4f0d4669e627896dd0f530abd8bf14204" exitCode=0
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.822456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p65zp" event={"ID":"af490140-34e3-4689-b13a-112b97f5cd9e","Type":"ContainerDied","Data":"a64b56e72369eec2d9b55244bbd3d5b4f0d4669e627896dd0f530abd8bf14204"}
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.843974 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpjb"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.893798 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" podStartSLOduration=130.893780281 podStartE2EDuration="2m10.893780281s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:04.879926488 +0000 UTC m=+151.400791232" watchObservedRunningTime="2026-02-18 14:34:04.893780281 +0000 UTC m=+151.414645015"
Feb 18 14:34:04 crc kubenswrapper[4957]: I0218 14:34:04.937118 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wg8d9"]
Feb 18 14:34:04 crc kubenswrapper[4957]: W0218 14:34:04.955845 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc559a7_20d1_416b_ae18_bcbfc5193d32.slice/crio-1e4679f64a5102c65cfc61d96a99f281f67b7b285f11e659a85efde3304a2b43 WatchSource:0}: Error finding container 1e4679f64a5102c65cfc61d96a99f281f67b7b285f11e659a85efde3304a2b43: Status 404 returned error can't find the container with id 1e4679f64a5102c65cfc61d96a99f281f67b7b285f11e659a85efde3304a2b43
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.034643 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.100049 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-d9hcp"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.119929 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.121074 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.123934 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gpb4w"]
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.128099 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.136804 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.181747 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpb4w"]
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.182407 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.211246 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-catalog-content\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.211287 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcvg\" (UniqueName: \"kubernetes.io/projected/f4d1018f-2e03-4372-98e6-cbba16adff43-kube-api-access-zxcvg\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.211336 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-utilities\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.213824 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.227274 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpjb"]
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.312554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-utilities\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.312759 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-catalog-content\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.312781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcvg\" (UniqueName: \"kubernetes.io/projected/f4d1018f-2e03-4372-98e6-cbba16adff43-kube-api-access-zxcvg\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.313440 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-utilities\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.313517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-catalog-content\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.333740 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcvg\" (UniqueName: \"kubernetes.io/projected/f4d1018f-2e03-4372-98e6-cbba16adff43-kube-api-access-zxcvg\") pod \"redhat-operators-gpb4w\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") " pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.414257 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-config-volume\") pod \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") "
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.414409 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlpv6\" (UniqueName: \"kubernetes.io/projected/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-kube-api-access-tlpv6\") pod \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") "
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.414470 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-secret-volume\") pod \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\" (UID: \"4cbe5d01-ad51-4b03-aba9-8757f5643bdb\") "
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.416053 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "4cbe5d01-ad51-4b03-aba9-8757f5643bdb" (UID: "4cbe5d01-ad51-4b03-aba9-8757f5643bdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.419798 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-kube-api-access-tlpv6" (OuterVolumeSpecName: "kube-api-access-tlpv6") pod "4cbe5d01-ad51-4b03-aba9-8757f5643bdb" (UID: "4cbe5d01-ad51-4b03-aba9-8757f5643bdb"). InnerVolumeSpecName "kube-api-access-tlpv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.419855 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4cbe5d01-ad51-4b03-aba9-8757f5643bdb" (UID: "4cbe5d01-ad51-4b03-aba9-8757f5643bdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.504633 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tbbh"]
Feb 18 14:34:05 crc kubenswrapper[4957]: E0218 14:34:05.507145 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cbe5d01-ad51-4b03-aba9-8757f5643bdb" containerName="collect-profiles"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.507270 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cbe5d01-ad51-4b03-aba9-8757f5643bdb" containerName="collect-profiles"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.507628 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbe5d01-ad51-4b03-aba9-8757f5643bdb" containerName="collect-profiles"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.509008 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.510971 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.515355 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.515384 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.515398 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlpv6\" (UniqueName: \"kubernetes.io/projected/4cbe5d01-ad51-4b03-aba9-8757f5643bdb-kube-api-access-tlpv6\") on node \"crc\" DevicePath \"\""
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.518537 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tbbh"]
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.611321 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jg752"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.617399 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 14:34:05 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld
Feb 18 14:34:05 crc kubenswrapper[4957]: [+]process-running ok
Feb 18 14:34:05 crc kubenswrapper[4957]: healthz check failed
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.617520 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.621385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkwt\" (UniqueName: \"kubernetes.io/projected/597544d6-7743-45b7-91d3-54e797b3e342-kube-api-access-vwkwt\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.621549 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-catalog-content\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.621585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-utilities\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.723068 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwkwt\" (UniqueName: \"kubernetes.io/projected/597544d6-7743-45b7-91d3-54e797b3e342-kube-api-access-vwkwt\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.723164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-catalog-content\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.723199 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-utilities\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.723770 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-utilities\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.723861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-catalog-content\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.741235 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.741297 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.741556 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.741576 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.753070 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.753139 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hvb66"
Feb 18 14:34:05 crc kubenswrapper[4957]: I0218
14:34:05.755156 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwkwt\" (UniqueName: \"kubernetes.io/projected/597544d6-7743-45b7-91d3-54e797b3e342-kube-api-access-vwkwt\") pod \"redhat-operators-5tbbh\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") " pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.759686 4957 patch_prober.go:28] interesting pod/console-f9d7485db-hvb66 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.760136 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hvb66" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.762137 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.763140 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.768559 4957 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fpnw9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]log ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]etcd ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/max-in-flight-filter ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 18 14:34:05 crc kubenswrapper[4957]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/project.openshift.io-projectcache ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/openshift.io-startinformers ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 18 14:34:05 crc kubenswrapper[4957]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 14:34:05 crc kubenswrapper[4957]: livez check failed Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.768616 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" podUID="b57b2a2c-ac8c-4c84-84c4-c24479600b71" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.836521 4957 generic.go:334] "Generic (PLEG): container finished" podID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" 
containerID="5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9" exitCode=0 Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.836602 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerDied","Data":"5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9"} Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.836640 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerStarted","Data":"190158fbf3221724df9d5bf7f29d47005dde94e402292b60b6664156459b2063"} Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.850770 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" event={"ID":"4cbe5d01-ad51-4b03-aba9-8757f5643bdb","Type":"ContainerDied","Data":"1abb467f37fea709ec57d2822cd084487a3dd11d7e3fda9fb5b451959d464df2"} Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.850823 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1abb467f37fea709ec57d2822cd084487a3dd11d7e3fda9fb5b451959d464df2" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.850934 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.868586 4957 generic.go:334] "Generic (PLEG): container finished" podID="4c4b9db2-8bcc-4ad3-88da-2c26aeb72241" containerID="89c3a46a5f3a7ce29dbb8993acbe483b28833cff46b7591e714a7460dc789f7b" exitCode=0 Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.868659 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241","Type":"ContainerDied","Data":"89c3a46a5f3a7ce29dbb8993acbe483b28833cff46b7591e714a7460dc789f7b"} Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.873301 4957 generic.go:334] "Generic (PLEG): container finished" podID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerID="6d19ba2bd7747d06d8dc2f1e82b008add7b9e6a84c91ec6bbda37c727b74d811" exitCode=0 Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.874403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerDied","Data":"6d19ba2bd7747d06d8dc2f1e82b008add7b9e6a84c91ec6bbda37c727b74d811"} Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.874477 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerStarted","Data":"1e4679f64a5102c65cfc61d96a99f281f67b7b285f11e659a85efde3304a2b43"} Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.916659 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.959015 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.972537 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.973384 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.978102 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.981586 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 14:34:05 crc kubenswrapper[4957]: I0218 14:34:05.991102 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.056792 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpb4w"] Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.134381 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25e0950e-0a83-453a-9b1c-71d08a01c026-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.134880 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25e0950e-0a83-453a-9b1c-71d08a01c026-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.236045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25e0950e-0a83-453a-9b1c-71d08a01c026-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.236171 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25e0950e-0a83-453a-9b1c-71d08a01c026-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.236278 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25e0950e-0a83-453a-9b1c-71d08a01c026-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.278464 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.285905 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/25e0950e-0a83-453a-9b1c-71d08a01c026-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.309893 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.418856 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tbbh"] Feb 18 14:34:06 crc kubenswrapper[4957]: W0218 14:34:06.429013 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod597544d6_7743_45b7_91d3_54e797b3e342.slice/crio-edb1992b016c8f0e686b348126832778959b400038ad07c15e2e8da9cabd3e3d WatchSource:0}: Error finding container edb1992b016c8f0e686b348126832778959b400038ad07c15e2e8da9cabd3e3d: Status 404 returned error can't find the container with id edb1992b016c8f0e686b348126832778959b400038ad07c15e2e8da9cabd3e3d Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.621796 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:06 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:06 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:06 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.621857 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.674092 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 14:34:06 crc kubenswrapper[4957]: W0218 14:34:06.687617 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod25e0950e_0a83_453a_9b1c_71d08a01c026.slice/crio-fd61225a72eb4de7c950cb152d89f07a7ed3ed4418f0ed4eaa64d7b874715bdf WatchSource:0}: Error finding container fd61225a72eb4de7c950cb152d89f07a7ed3ed4418f0ed4eaa64d7b874715bdf: Status 404 returned error can't find the container with id fd61225a72eb4de7c950cb152d89f07a7ed3ed4418f0ed4eaa64d7b874715bdf Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.884962 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerID="5fad4694dfc8b3de7e40efa9fadd2ec82a1ab862d10013beb1fed441838e534f" exitCode=0 Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.885098 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpb4w" event={"ID":"f4d1018f-2e03-4372-98e6-cbba16adff43","Type":"ContainerDied","Data":"5fad4694dfc8b3de7e40efa9fadd2ec82a1ab862d10013beb1fed441838e534f"} Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.885156 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpb4w" event={"ID":"f4d1018f-2e03-4372-98e6-cbba16adff43","Type":"ContainerStarted","Data":"fbb1194a118bb54e6ce20d642367aa7092b892befb61fe853d343736f8361e12"} Feb 18 14:34:06 crc 
kubenswrapper[4957]: I0218 14:34:06.891643 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"25e0950e-0a83-453a-9b1c-71d08a01c026","Type":"ContainerStarted","Data":"fd61225a72eb4de7c950cb152d89f07a7ed3ed4418f0ed4eaa64d7b874715bdf"} Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.910652 4957 generic.go:334] "Generic (PLEG): container finished" podID="597544d6-7743-45b7-91d3-54e797b3e342" containerID="dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3" exitCode=0 Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.910771 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerDied","Data":"dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3"} Feb 18 14:34:06 crc kubenswrapper[4957]: I0218 14:34:06.910806 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerStarted","Data":"edb1992b016c8f0e686b348126832778959b400038ad07c15e2e8da9cabd3e3d"} Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.217546 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.283083 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.283172 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.365955 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kube-api-access\") pod \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.366101 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kubelet-dir\") pod \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\" (UID: \"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241\") " Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.366241 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c4b9db2-8bcc-4ad3-88da-2c26aeb72241" (UID: "4c4b9db2-8bcc-4ad3-88da-2c26aeb72241"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.366655 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.375795 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c4b9db2-8bcc-4ad3-88da-2c26aeb72241" (UID: "4c4b9db2-8bcc-4ad3-88da-2c26aeb72241"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.468484 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c4b9db2-8bcc-4ad3-88da-2c26aeb72241-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.610642 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:07 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:07 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:07 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.610738 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.928113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4c4b9db2-8bcc-4ad3-88da-2c26aeb72241","Type":"ContainerDied","Data":"3e5897ef379b895a9056e29d8b3d03d7cf31b0c0ced3a53483dfd6048cd94a01"} Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.928161 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5897ef379b895a9056e29d8b3d03d7cf31b0c0ced3a53483dfd6048cd94a01" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.928250 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.935802 4957 generic.go:334] "Generic (PLEG): container finished" podID="25e0950e-0a83-453a-9b1c-71d08a01c026" containerID="ab76fce5f5061c930951b35273acd37f81b41480e903e686eb7c1b3b237537aa" exitCode=0 Feb 18 14:34:07 crc kubenswrapper[4957]: I0218 14:34:07.935964 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"25e0950e-0a83-453a-9b1c-71d08a01c026","Type":"ContainerDied","Data":"ab76fce5f5061c930951b35273acd37f81b41480e903e686eb7c1b3b237537aa"} Feb 18 14:34:08 crc kubenswrapper[4957]: I0218 14:34:08.610296 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:08 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:08 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:08 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:08 crc kubenswrapper[4957]: I0218 14:34:08.610374 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.312210 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.419187 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25e0950e-0a83-453a-9b1c-71d08a01c026-kube-api-access\") pod \"25e0950e-0a83-453a-9b1c-71d08a01c026\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.419337 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25e0950e-0a83-453a-9b1c-71d08a01c026-kubelet-dir\") pod \"25e0950e-0a83-453a-9b1c-71d08a01c026\" (UID: \"25e0950e-0a83-453a-9b1c-71d08a01c026\") " Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.419495 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25e0950e-0a83-453a-9b1c-71d08a01c026-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "25e0950e-0a83-453a-9b1c-71d08a01c026" (UID: "25e0950e-0a83-453a-9b1c-71d08a01c026"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.420118 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25e0950e-0a83-453a-9b1c-71d08a01c026-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.439833 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e0950e-0a83-453a-9b1c-71d08a01c026-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "25e0950e-0a83-453a-9b1c-71d08a01c026" (UID: "25e0950e-0a83-453a-9b1c-71d08a01c026"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.521806 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25e0950e-0a83-453a-9b1c-71d08a01c026-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.612066 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:09 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:09 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:09 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:09 crc kubenswrapper[4957]: I0218 14:34:09.612170 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.014885 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"25e0950e-0a83-453a-9b1c-71d08a01c026","Type":"ContainerDied","Data":"fd61225a72eb4de7c950cb152d89f07a7ed3ed4418f0ed4eaa64d7b874715bdf"} Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.014927 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd61225a72eb4de7c950cb152d89f07a7ed3ed4418f0ed4eaa64d7b874715bdf" Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.015299 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.609687 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:10 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:10 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:10 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.609934 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.778260 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:34:10 crc kubenswrapper[4957]: I0218 14:34:10.785754 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" Feb 18 14:34:11 crc kubenswrapper[4957]: I0218 14:34:11.028029 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wbzr4" Feb 18 14:34:11 crc kubenswrapper[4957]: I0218 14:34:11.615413 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:34:11 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 14:34:11 crc kubenswrapper[4957]: [+]process-running ok Feb 18 14:34:11 crc kubenswrapper[4957]: healthz check failed Feb 18 14:34:11 crc kubenswrapper[4957]: I0218 14:34:11.615534 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:34:12 crc kubenswrapper[4957]: I0218 14:34:12.617289 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:34:12 crc kubenswrapper[4957]: I0218 14:34:12.625152 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 14:34:15 crc kubenswrapper[4957]: I0218 14:34:15.740640 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 14:34:15 crc kubenswrapper[4957]: I0218 14:34:15.740743 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 14:34:15 crc kubenswrapper[4957]: I0218 14:34:15.741938 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 14:34:15 crc kubenswrapper[4957]: I0218 14:34:15.742347 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 14:34:15 crc kubenswrapper[4957]: I0218 14:34:15.753910 4957 patch_prober.go:28] interesting pod/console-f9d7485db-hvb66 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 18 14:34:15 crc kubenswrapper[4957]: I0218 14:34:15.753968 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hvb66" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 18 14:34:16 crc kubenswrapper[4957]: I0218 14:34:16.875753 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:34:16 crc kubenswrapper[4957]: I0218 14:34:16.901332 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58c40982-35c8-4670-ad21-513a7a5a458e-metrics-certs\") pod \"network-metrics-daemon-jkmlc\" (UID: \"58c40982-35c8-4670-ad21-513a7a5a458e\") " pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:34:17 crc kubenswrapper[4957]: I0218 14:34:17.150348 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jkmlc" Feb 18 14:34:21 crc kubenswrapper[4957]: I0218 14:34:21.078317 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lp5cj"] Feb 18 14:34:21 crc kubenswrapper[4957]: I0218 14:34:21.079317 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" containerID="cri-o://20e4ae50292bda4da977486c8b0618e87da93f79621b99e9fbb2ef7f7ee55626" gracePeriod=30 Feb 18 14:34:21 crc kubenswrapper[4957]: I0218 14:34:21.109302 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"] Feb 18 14:34:21 crc kubenswrapper[4957]: I0218 14:34:21.109625 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" containerID="cri-o://f166d8c7004ce26d6faae6b20632b93b0d08cfb36dd73a271dc623e8f4c622df" gracePeriod=30 Feb 18 14:34:21 crc kubenswrapper[4957]: I0218 14:34:21.539583 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jkmlc"] Feb 18 14:34:22 crc kubenswrapper[4957]: I0218 14:34:22.209268 4957 generic.go:334] "Generic (PLEG): container finished" podID="609d339c-7830-4e17-b847-da17573e1ed0" containerID="20e4ae50292bda4da977486c8b0618e87da93f79621b99e9fbb2ef7f7ee55626" exitCode=0 Feb 18 14:34:22 crc kubenswrapper[4957]: I0218 14:34:22.209339 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" event={"ID":"609d339c-7830-4e17-b847-da17573e1ed0","Type":"ContainerDied","Data":"20e4ae50292bda4da977486c8b0618e87da93f79621b99e9fbb2ef7f7ee55626"} Feb 18 14:34:24 crc kubenswrapper[4957]: I0218 14:34:24.256139 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:34:24 crc kubenswrapper[4957]: I0218 14:34:24.973493 4957 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fkdzp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 18 14:34:24 crc kubenswrapper[4957]: I0218 14:34:24.973578 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 18 14:34:25 crc kubenswrapper[4957]: I0218 14:34:25.024621 4957 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lp5cj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 14:34:25 crc kubenswrapper[4957]: I0218 14:34:25.024701 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 14:34:25 crc kubenswrapper[4957]: I0218 14:34:25.750130 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 14:34:25 crc kubenswrapper[4957]: I0218 14:34:25.756861 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:34:25 crc kubenswrapper[4957]: I0218 14:34:25.762008 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:34:34 crc kubenswrapper[4957]: I0218 14:34:34.974272 4957 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fkdzp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 18 14:34:34 crc kubenswrapper[4957]: I0218 14:34:34.974766 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 18 14:34:35 crc kubenswrapper[4957]: I0218 14:34:35.024877 4957 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lp5cj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 14:34:35 crc kubenswrapper[4957]: I0218 14:34:35.024973 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 14:34:35 crc kubenswrapper[4957]: W0218 14:34:35.629269 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58c40982_35c8_4670_ad21_513a7a5a458e.slice/crio-e434701299b8d119187cd59186699c4191ef01b77569ece9518acf748fb123c8 WatchSource:0}: Error finding container e434701299b8d119187cd59186699c4191ef01b77569ece9518acf748fb123c8: Status 404 returned error can't find the container with id e434701299b8d119187cd59186699c4191ef01b77569ece9518acf748fb123c8 Feb 18 14:34:35 crc kubenswrapper[4957]: I0218 14:34:35.647054 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" Feb 18 14:34:36 crc kubenswrapper[4957]: I0218 14:34:36.289471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" event={"ID":"58c40982-35c8-4670-ad21-513a7a5a458e","Type":"ContainerStarted","Data":"e434701299b8d119187cd59186699c4191ef01b77569ece9518acf748fb123c8"} Feb 18 14:34:36 crc kubenswrapper[4957]: I0218 14:34:36.292318 4957 generic.go:334] 
"Generic (PLEG): container finished" podID="9ce37342-c95b-4be4-b48c-91553e81206a" containerID="f166d8c7004ce26d6faae6b20632b93b0d08cfb36dd73a271dc623e8f4c622df" exitCode=0 Feb 18 14:34:36 crc kubenswrapper[4957]: I0218 14:34:36.292380 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" event={"ID":"9ce37342-c95b-4be4-b48c-91553e81206a","Type":"ContainerDied","Data":"f166d8c7004ce26d6faae6b20632b93b0d08cfb36dd73a271dc623e8f4c622df"} Feb 18 14:34:37 crc kubenswrapper[4957]: I0218 14:34:37.280141 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:34:37 crc kubenswrapper[4957]: I0218 14:34:37.280633 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.265599 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.270114 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.319965 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce37342-c95b-4be4-b48c-91553e81206a-serving-cert\") pod \"9ce37342-c95b-4be4-b48c-91553e81206a\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320057 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-config\") pod \"9ce37342-c95b-4be4-b48c-91553e81206a\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320114 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-client-ca\") pod \"609d339c-7830-4e17-b847-da17573e1ed0\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320525 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46tj\" (UniqueName: \"kubernetes.io/projected/9ce37342-c95b-4be4-b48c-91553e81206a-kube-api-access-f46tj\") pod \"9ce37342-c95b-4be4-b48c-91553e81206a\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320600 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609d339c-7830-4e17-b847-da17573e1ed0-serving-cert\") pod \"609d339c-7830-4e17-b847-da17573e1ed0\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320624 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-client-ca\") pod \"9ce37342-c95b-4be4-b48c-91553e81206a\" (UID: \"9ce37342-c95b-4be4-b48c-91553e81206a\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320657 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-proxy-ca-bundles\") pod \"609d339c-7830-4e17-b847-da17573e1ed0\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320723 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-config\") pod \"609d339c-7830-4e17-b847-da17573e1ed0\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.320750 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p46m8\" (UniqueName: \"kubernetes.io/projected/609d339c-7830-4e17-b847-da17573e1ed0-kube-api-access-p46m8\") pod \"609d339c-7830-4e17-b847-da17573e1ed0\" (UID: \"609d339c-7830-4e17-b847-da17573e1ed0\") " Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322053 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-client-ca" (OuterVolumeSpecName: "client-ca") pod "609d339c-7830-4e17-b847-da17573e1ed0" (UID: "609d339c-7830-4e17-b847-da17573e1ed0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322108 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "609d339c-7830-4e17-b847-da17573e1ed0" (UID: "609d339c-7830-4e17-b847-da17573e1ed0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322174 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9ce37342-c95b-4be4-b48c-91553e81206a" (UID: "9ce37342-c95b-4be4-b48c-91553e81206a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322375 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-config" (OuterVolumeSpecName: "config") pod "609d339c-7830-4e17-b847-da17573e1ed0" (UID: "609d339c-7830-4e17-b847-da17573e1ed0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322655 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" event={"ID":"9ce37342-c95b-4be4-b48c-91553e81206a","Type":"ContainerDied","Data":"8a9a86ade20c051332e06667e51ed8af13c3298eea6952593191428c74eaaef1"} Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322711 4957 scope.go:117] "RemoveContainer" containerID="f166d8c7004ce26d6faae6b20632b93b0d08cfb36dd73a271dc623e8f4c622df" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.322848 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.324984 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-config" (OuterVolumeSpecName: "config") pod "9ce37342-c95b-4be4-b48c-91553e81206a" (UID: "9ce37342-c95b-4be4-b48c-91553e81206a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.326197 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" event={"ID":"609d339c-7830-4e17-b847-da17573e1ed0","Type":"ContainerDied","Data":"5652f1f7b30a9ca29abe4f9463be7ab2821452d2910a20492a6b9c8ee50f2587"} Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.326368 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lp5cj" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.337353 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609d339c-7830-4e17-b847-da17573e1ed0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "609d339c-7830-4e17-b847-da17573e1ed0" (UID: "609d339c-7830-4e17-b847-da17573e1ed0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.340958 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609d339c-7830-4e17-b847-da17573e1ed0-kube-api-access-p46m8" (OuterVolumeSpecName: "kube-api-access-p46m8") pod "609d339c-7830-4e17-b847-da17573e1ed0" (UID: "609d339c-7830-4e17-b847-da17573e1ed0"). InnerVolumeSpecName "kube-api-access-p46m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.344722 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce37342-c95b-4be4-b48c-91553e81206a-kube-api-access-f46tj" (OuterVolumeSpecName: "kube-api-access-f46tj") pod "9ce37342-c95b-4be4-b48c-91553e81206a" (UID: "9ce37342-c95b-4be4-b48c-91553e81206a"). InnerVolumeSpecName "kube-api-access-f46tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.349228 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce37342-c95b-4be4-b48c-91553e81206a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ce37342-c95b-4be4-b48c-91553e81206a" (UID: "9ce37342-c95b-4be4-b48c-91553e81206a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.354553 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d"] Feb 18 14:34:39 crc kubenswrapper[4957]: E0218 14:34:39.357335 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.357369 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" Feb 18 14:34:39 crc kubenswrapper[4957]: E0218 14:34:39.357387 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4b9db2-8bcc-4ad3-88da-2c26aeb72241" containerName="pruner" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.357396 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4b9db2-8bcc-4ad3-88da-2c26aeb72241" containerName="pruner" Feb 18 14:34:39 crc kubenswrapper[4957]: E0218 14:34:39.358680 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.358705 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" Feb 18 14:34:39 crc kubenswrapper[4957]: E0218 14:34:39.358720 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e0950e-0a83-453a-9b1c-71d08a01c026" containerName="pruner" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.358729 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e0950e-0a83-453a-9b1c-71d08a01c026" containerName="pruner" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.358873 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e0950e-0a83-453a-9b1c-71d08a01c026" containerName="pruner" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.358887 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="609d339c-7830-4e17-b847-da17573e1ed0" containerName="controller-manager" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.358902 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" containerName="route-controller-manager" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.358913 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4b9db2-8bcc-4ad3-88da-2c26aeb72241" containerName="pruner" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.359461 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.366869 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d"] Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-config\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5gz\" (UniqueName: \"kubernetes.io/projected/47079600-0758-4b59-a2ed-97cfe597a38f-kube-api-access-9n5gz\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422784 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-client-ca\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422816 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47079600-0758-4b59-a2ed-97cfe597a38f-serving-cert\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422855 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422866 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p46m8\" (UniqueName: \"kubernetes.io/projected/609d339c-7830-4e17-b847-da17573e1ed0-kube-api-access-p46m8\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422876 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce37342-c95b-4be4-b48c-91553e81206a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422889 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422898 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422907 4957 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-f46tj\" (UniqueName: \"kubernetes.io/projected/9ce37342-c95b-4be4-b48c-91553e81206a-kube-api-access-f46tj\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422921 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ce37342-c95b-4be4-b48c-91553e81206a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422942 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609d339c-7830-4e17-b847-da17573e1ed0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.422954 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/609d339c-7830-4e17-b847-da17573e1ed0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.538332 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-client-ca\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.538401 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47079600-0758-4b59-a2ed-97cfe597a38f-serving-cert\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.538465 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-config\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.538518 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5gz\" (UniqueName: \"kubernetes.io/projected/47079600-0758-4b59-a2ed-97cfe597a38f-kube-api-access-9n5gz\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.539529 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-client-ca\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.540488 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-config\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc 
kubenswrapper[4957]: I0218 14:34:39.545227 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47079600-0758-4b59-a2ed-97cfe597a38f-serving-cert\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.560287 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n5gz\" (UniqueName: \"kubernetes.io/projected/47079600-0758-4b59-a2ed-97cfe597a38f-kube-api-access-9n5gz\") pod \"route-controller-manager-5c97956df5-qdh8d\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.650675 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"] Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.655725 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fkdzp"] Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.660222 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lp5cj"] Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.666764 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lp5cj"] Feb 18 14:34:39 crc kubenswrapper[4957]: I0218 14:34:39.719182 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:40 crc kubenswrapper[4957]: I0218 14:34:40.222400 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609d339c-7830-4e17-b847-da17573e1ed0" path="/var/lib/kubelet/pods/609d339c-7830-4e17-b847-da17573e1ed0/volumes" Feb 18 14:34:40 crc kubenswrapper[4957]: I0218 14:34:40.223060 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce37342-c95b-4be4-b48c-91553e81206a" path="/var/lib/kubelet/pods/9ce37342-c95b-4be4-b48c-91553e81206a/volumes" Feb 18 14:34:40 crc kubenswrapper[4957]: E0218 14:34:40.620405 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 14:34:40 crc kubenswrapper[4957]: E0218 14:34:40.620657 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9c5w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wg8d9_openshift-marketplace(ffc559a7-20d1-416b-ae18-bcbfc5193d32): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:40 crc kubenswrapper[4957]: E0218 14:34:40.622235 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wg8d9" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.087657 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-777c8b6456-wpp9b"] Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.088777 4957 util.go:30] "No sandbox for pod can be found. 
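[Editor's annotation] The E-level records above show the extract-content init container of the marketplace catalog pods failing with ErrImagePull ("context canceled") while pulling the index images, after which the kubelet logs "Error syncing pod, skipping". A small sketch, reading the log on stdin, that counts "PullImage from image service failed" records per image; the regex is an assumption for illustration:

// pullfails.go: count image pull failures per image, keyed off the
// log.go:32 "PullImage from image service failed" records seen above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`"PullImage from image service failed".*image="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // pull-failure records are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for img, n := range counts {
		fmt.Printf("%3d pull failures  %s\n", n, img)
	}
}

Fed this excerpt, it would report repeated failures for the four registry.redhat.io index images (redhat-marketplace, redhat-operator, certified-operator, community-operator), all with the same "context canceled" cause.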
Need to start a new one" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.090792 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.091198 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.091588 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.091951 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.092499 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.092715 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.102367 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777c8b6456-wpp9b"] Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.104011 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.159394 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27455848-8e9a-475d-b5bc-7461384ba575-serving-cert\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.159793 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-config\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.159827 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-proxy-ca-bundles\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.159865 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxbz\" (UniqueName: \"kubernetes.io/projected/27455848-8e9a-475d-b5bc-7461384ba575-kube-api-access-btxbz\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.159896 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-client-ca\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.183624 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d"] Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.261127 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxbz\" (UniqueName: \"kubernetes.io/projected/27455848-8e9a-475d-b5bc-7461384ba575-kube-api-access-btxbz\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.261194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-client-ca\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.261270 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27455848-8e9a-475d-b5bc-7461384ba575-serving-cert\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.261293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-config\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.261330 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-proxy-ca-bundles\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.263302 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-client-ca\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.264053 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-proxy-ca-bundles\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.264298 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-config\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.285934 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27455848-8e9a-475d-b5bc-7461384ba575-serving-cert\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.290389 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxbz\" (UniqueName: \"kubernetes.io/projected/27455848-8e9a-475d-b5bc-7461384ba575-kube-api-access-btxbz\") pod \"controller-manager-777c8b6456-wpp9b\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.414955 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:41 crc kubenswrapper[4957]: I0218 14:34:41.742944 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:34:44 crc kubenswrapper[4957]: E0218 14:34:44.203607 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wg8d9" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" Feb 18 14:34:44 crc kubenswrapper[4957]: E0218 14:34:44.273610 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 14:34:44 crc kubenswrapper[4957]: E0218 14:34:44.273852 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwkwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5tbbh_openshift-marketplace(597544d6-7743-45b7-91d3-54e797b3e342): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:44 crc kubenswrapper[4957]: E0218 14:34:44.275097 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5tbbh" podUID="597544d6-7743-45b7-91d3-54e797b3e342" Feb 18 14:34:45 crc kubenswrapper[4957]: E0218 14:34:45.812724 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5tbbh" podUID="597544d6-7743-45b7-91d3-54e797b3e342" Feb 18 14:34:45 crc kubenswrapper[4957]: E0218 14:34:45.883149 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 14:34:45 crc kubenswrapper[4957]: E0218 14:34:45.883348 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlkgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lj4dn_openshift-marketplace(f83e4add-33e3-4e43-af47-f9980471df63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:45 crc kubenswrapper[4957]: E0218 14:34:45.884599 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lj4dn" podUID="f83e4add-33e3-4e43-af47-f9980471df63" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.145402 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.146593 4957 util.go:30] "No sandbox for pod can be found. 
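[Editor's annotation] Once a pull fails, the pod worker records above flip between ErrImagePull and ImagePullBackOff as the kubelet retries with exponential backoff. A sketch that collects the ImagePullBackOff timestamps per pod so the retry cadence is visible; the klog timestamp has no year, so time.Parse yields year 0, which is harmless since only the intervals matter. Regex and layout are assumptions:

// backoff.go: extract ImagePullBackOff retry times per pod from records like
// `E0218 14:34:44.203607 ... "Error syncing pod, skipping" ... ImagePullBackOff ... pod="..."`.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var re = regexp.MustCompile(`E(\d{4} \d\d:\d\d:\d\d\.\d{6}).*ImagePullBackOff.*pod="([^"]+)"`)

func main() {
	when := map[string][]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		t, err := time.Parse("0102 15:04:05.000000", m[1]) // klog header: MMDD hh:mm:ss.micros
		if err != nil {
			continue
		}
		when[m[2]] = append(when[m[2]], t)
	}
	for pod, ts := range when {
		fmt.Println(pod)
		for i := 1; i < len(ts); i++ {
			fmt.Printf("  retry %d after %v\n", i, ts[i].Sub(ts[i-1]).Round(time.Second))
		}
	}
}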
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.150504 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.157749 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.158057 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.239229 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5749f52b-5200-4d2e-a997-9e25a1839903-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.239302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5749f52b-5200-4d2e-a997-9e25a1839903-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.341114 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5749f52b-5200-4d2e-a997-9e25a1839903-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.341254 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5749f52b-5200-4d2e-a997-9e25a1839903-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.341347 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5749f52b-5200-4d2e-a997-9e25a1839903-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.363099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5749f52b-5200-4d2e-a997-9e25a1839903-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:46 crc kubenswrapper[4957]: I0218 14:34:46.475317 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.229534 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lj4dn" podUID="f83e4add-33e3-4e43-af47-f9980471df63" Feb 18 14:34:47 crc kubenswrapper[4957]: I0218 14:34:47.240927 4957 scope.go:117] "RemoveContainer" containerID="20e4ae50292bda4da977486c8b0618e87da93f79621b99e9fbb2ef7f7ee55626" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.323617 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.323848 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdh9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gnwwm_openshift-marketplace(0237301c-c7c4-490c-9be1-daaa5db30a10): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.325746 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gnwwm" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.333188 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.333390 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtrn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9jpjb_openshift-marketplace(fbe80d3e-51d2-41fa-aa8b-5607e7e69745): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.334611 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9jpjb" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.390640 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gnwwm" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.390950 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9jpjb" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.395963 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.396464 4957 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67dpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wh6pb_openshift-marketplace(74312833-84d3-4221-a8f7-07c892db5165): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.397757 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wh6pb" podUID="74312833-84d3-4221-a8f7-07c892db5165" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.398173 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.398636 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dffxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p65zp_openshift-marketplace(af490140-34e3-4689-b13a-112b97f5cd9e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.400211 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p65zp" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.443836 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.444711 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxcvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gpb4w_openshift-marketplace(f4d1018f-2e03-4372-98e6-cbba16adff43): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:34:47 crc kubenswrapper[4957]: E0218 14:34:47.447113 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gpb4w" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" Feb 18 14:34:47 crc kubenswrapper[4957]: I0218 14:34:47.688611 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d"] Feb 18 14:34:47 crc kubenswrapper[4957]: I0218 14:34:47.728254 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777c8b6456-wpp9b"] Feb 18 14:34:47 crc kubenswrapper[4957]: W0218 14:34:47.742074 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27455848_8e9a_475d_b5bc_7461384ba575.slice/crio-cb7421ab31f81d67d1ef47e38aacc5b4e3e28b8d1ea701bf0728ac218a3a522f WatchSource:0}: Error finding container cb7421ab31f81d67d1ef47e38aacc5b4e3e28b8d1ea701bf0728ac218a3a522f: Status 404 returned error can't find the container with id cb7421ab31f81d67d1ef47e38aacc5b4e3e28b8d1ea701bf0728ac218a3a522f Feb 18 14:34:47 crc kubenswrapper[4957]: I0218 14:34:47.793703 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:34:47 crc kubenswrapper[4957]: W0218 14:34:47.806653 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5749f52b_5200_4d2e_a997_9e25a1839903.slice/crio-07f2afcbab4f591a966f940d114621e9ffd34f6e3ffeef809a266b6854d21418 WatchSource:0}: Error finding container 07f2afcbab4f591a966f940d114621e9ffd34f6e3ffeef809a266b6854d21418: Status 404 returned error can't find the container with id 
07f2afcbab4f591a966f940d114621e9ffd34f6e3ffeef809a266b6854d21418 Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.399045 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" event={"ID":"27455848-8e9a-475d-b5bc-7461384ba575","Type":"ContainerStarted","Data":"7e7084fbda1c7228c1e42f7cf4b3376a5c90faaacb1043d891fa810d1cfeac4b"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.399575 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" event={"ID":"27455848-8e9a-475d-b5bc-7461384ba575","Type":"ContainerStarted","Data":"cb7421ab31f81d67d1ef47e38aacc5b4e3e28b8d1ea701bf0728ac218a3a522f"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.401402 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.403130 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" event={"ID":"47079600-0758-4b59-a2ed-97cfe597a38f","Type":"ContainerStarted","Data":"e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.403167 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" event={"ID":"47079600-0758-4b59-a2ed-97cfe597a38f","Type":"ContainerStarted","Data":"88a4891c232a39e8e72a75ed79cef0a3a6d18e3569764fd41e78d7df8308af8c"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.403334 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" podUID="47079600-0758-4b59-a2ed-97cfe597a38f" containerName="route-controller-manager" containerID="cri-o://e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d" gracePeriod=30 Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.404086 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.416723 4957 patch_prober.go:28] interesting pod/controller-manager-777c8b6456-wpp9b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.421203 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" podUID="27455848-8e9a-475d-b5bc-7461384ba575" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.421641 4957 patch_prober.go:28] interesting pod/route-controller-manager-5c97956df5-qdh8d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.429844 4957 prober.go:107] "Probe failed" probeType="Readiness" 
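[Editor's annotation] The prober.go:107 records above are the expected transient phase right after ContainerStarted: the readiness endpoint refuses connections until the controller-manager begins listening on :8443. A sketch that lists probe failures with pod, container, and output; the output field contains klog-escaped quotes, which the regex tolerates. All names here are assumptions:

// probes.go: print readiness/liveness probe failures from "Probe failed" records.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// output="..." may contain escaped quotes like \"https://...\", hence (?:[^"\\]|\\.)*.
var re = regexp.MustCompile(`"Probe failed" probeType="([^"]+)" pod="([^"]+)".*containerName="([^"]+)".*output="((?:[^"\\]|\\.)*)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s %s/%s: %s\n", m[1], m[2], m[3], m[4])
		}
	}
}

On this excerpt it would print the two Readiness failures (controller-manager on 10.217.0.55:8443, route-controller-manager on 10.217.0.54:8443), both "connection refused".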
pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" podUID="47079600-0758-4b59-a2ed-97cfe597a38f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.422671 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" event={"ID":"58c40982-35c8-4670-ad21-513a7a5a458e","Type":"ContainerStarted","Data":"2ff1acad0782e2ee5acc303cce9db18a2cfe593815eeea46fd511a5c75909cab"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.430222 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jkmlc" event={"ID":"58c40982-35c8-4670-ad21-513a7a5a458e","Type":"ContainerStarted","Data":"f05aae62155b3514631559d8b5a8a263f5b1d04a01bfc2780edb1d8dc9ede0b6"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.448481 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5749f52b-5200-4d2e-a997-9e25a1839903","Type":"ContainerStarted","Data":"eae440e24999d6cdfb4b666b61596ec56cb386e4e43c9466c9203ed88fb83ac7"} Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.448524 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5749f52b-5200-4d2e-a997-9e25a1839903","Type":"ContainerStarted","Data":"07f2afcbab4f591a966f940d114621e9ffd34f6e3ffeef809a266b6854d21418"} Feb 18 14:34:48 crc kubenswrapper[4957]: E0218 14:34:48.455930 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wh6pb" podUID="74312833-84d3-4221-a8f7-07c892db5165" Feb 18 14:34:48 crc kubenswrapper[4957]: E0218 14:34:48.456102 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gpb4w" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" Feb 18 14:34:48 crc kubenswrapper[4957]: E0218 14:34:48.456178 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p65zp" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.484377 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" podStartSLOduration=27.484350565 podStartE2EDuration="27.484350565s" podCreationTimestamp="2026-02-18 14:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:48.470041789 +0000 UTC m=+194.990906533" watchObservedRunningTime="2026-02-18 14:34:48.484350565 +0000 UTC m=+195.005215309" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.484779 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" podStartSLOduration=7.484771186 podStartE2EDuration="7.484771186s" podCreationTimestamp="2026-02-18 14:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:48.438970519 +0000 UTC m=+194.959835263" watchObservedRunningTime="2026-02-18 14:34:48.484771186 +0000 UTC m=+195.005635930" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.504052 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jkmlc" podStartSLOduration=174.504029629 podStartE2EDuration="2m54.504029629s" podCreationTimestamp="2026-02-18 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:48.502674452 +0000 UTC m=+195.023539196" watchObservedRunningTime="2026-02-18 14:34:48.504029629 +0000 UTC m=+195.024894373" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.547112 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.54709357 podStartE2EDuration="2.54709357s" podCreationTimestamp="2026-02-18 14:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:48.546239817 +0000 UTC m=+195.067104581" watchObservedRunningTime="2026-02-18 14:34:48.54709357 +0000 UTC m=+195.067958314" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.815684 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5c97956df5-qdh8d_47079600-0758-4b59-a2ed-97cfe597a38f/route-controller-manager/0.log" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.815774 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.852105 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj"] Feb 18 14:34:48 crc kubenswrapper[4957]: E0218 14:34:48.852860 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47079600-0758-4b59-a2ed-97cfe597a38f" containerName="route-controller-manager" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.852875 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="47079600-0758-4b59-a2ed-97cfe597a38f" containerName="route-controller-manager" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.852982 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="47079600-0758-4b59-a2ed-97cfe597a38f" containerName="route-controller-manager" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.853381 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.864538 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj"] Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.891778 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-client-ca\") pod \"47079600-0758-4b59-a2ed-97cfe597a38f\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.891879 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-config\") pod \"47079600-0758-4b59-a2ed-97cfe597a38f\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.891986 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5gz\" (UniqueName: \"kubernetes.io/projected/47079600-0758-4b59-a2ed-97cfe597a38f-kube-api-access-9n5gz\") pod \"47079600-0758-4b59-a2ed-97cfe597a38f\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.892066 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47079600-0758-4b59-a2ed-97cfe597a38f-serving-cert\") pod \"47079600-0758-4b59-a2ed-97cfe597a38f\" (UID: \"47079600-0758-4b59-a2ed-97cfe597a38f\") " Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.892291 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac46aaf-f49f-47d7-a71e-14d94fa7d759-serving-cert\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.892335 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-client-ca\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.892352 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqb8b\" (UniqueName: \"kubernetes.io/projected/dac46aaf-f49f-47d7-a71e-14d94fa7d759-kube-api-access-zqb8b\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.892374 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-config\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc 
kubenswrapper[4957]: I0218 14:34:48.892947 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-config" (OuterVolumeSpecName: "config") pod "47079600-0758-4b59-a2ed-97cfe597a38f" (UID: "47079600-0758-4b59-a2ed-97cfe597a38f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.893480 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-client-ca" (OuterVolumeSpecName: "client-ca") pod "47079600-0758-4b59-a2ed-97cfe597a38f" (UID: "47079600-0758-4b59-a2ed-97cfe597a38f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.901398 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47079600-0758-4b59-a2ed-97cfe597a38f-kube-api-access-9n5gz" (OuterVolumeSpecName: "kube-api-access-9n5gz") pod "47079600-0758-4b59-a2ed-97cfe597a38f" (UID: "47079600-0758-4b59-a2ed-97cfe597a38f"). InnerVolumeSpecName "kube-api-access-9n5gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.905578 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47079600-0758-4b59-a2ed-97cfe597a38f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47079600-0758-4b59-a2ed-97cfe597a38f" (UID: "47079600-0758-4b59-a2ed-97cfe597a38f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993725 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac46aaf-f49f-47d7-a71e-14d94fa7d759-serving-cert\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993828 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-client-ca\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993850 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqb8b\" (UniqueName: \"kubernetes.io/projected/dac46aaf-f49f-47d7-a71e-14d94fa7d759-kube-api-access-zqb8b\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993879 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-config\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993936 4957 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-9n5gz\" (UniqueName: \"kubernetes.io/projected/47079600-0758-4b59-a2ed-97cfe597a38f-kube-api-access-9n5gz\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993951 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47079600-0758-4b59-a2ed-97cfe597a38f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993962 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.993973 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47079600-0758-4b59-a2ed-97cfe597a38f-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.995315 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-client-ca\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:48 crc kubenswrapper[4957]: I0218 14:34:48.995940 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-config\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:48.999956 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac46aaf-f49f-47d7-a71e-14d94fa7d759-serving-cert\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.017593 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqb8b\" (UniqueName: \"kubernetes.io/projected/dac46aaf-f49f-47d7-a71e-14d94fa7d759-kube-api-access-zqb8b\") pod \"route-controller-manager-74fbcdbf75-m7xnj\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.201782 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.453983 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5c97956df5-qdh8d_47079600-0758-4b59-a2ed-97cfe597a38f/route-controller-manager/0.log" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.454386 4957 generic.go:334] "Generic (PLEG): container finished" podID="47079600-0758-4b59-a2ed-97cfe597a38f" containerID="e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d" exitCode=2 Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.454493 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.455203 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" event={"ID":"47079600-0758-4b59-a2ed-97cfe597a38f","Type":"ContainerDied","Data":"e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d"} Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.455234 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d" event={"ID":"47079600-0758-4b59-a2ed-97cfe597a38f","Type":"ContainerDied","Data":"88a4891c232a39e8e72a75ed79cef0a3a6d18e3569764fd41e78d7df8308af8c"} Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.455266 4957 scope.go:117] "RemoveContainer" containerID="e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.456853 4957 generic.go:334] "Generic (PLEG): container finished" podID="5749f52b-5200-4d2e-a997-9e25a1839903" containerID="eae440e24999d6cdfb4b666b61596ec56cb386e4e43c9466c9203ed88fb83ac7" exitCode=0 Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.456918 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5749f52b-5200-4d2e-a997-9e25a1839903","Type":"ContainerDied","Data":"eae440e24999d6cdfb4b666b61596ec56cb386e4e43c9466c9203ed88fb83ac7"} Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.462499 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.486628 4957 scope.go:117] "RemoveContainer" containerID="e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d" Feb 18 14:34:49 crc kubenswrapper[4957]: E0218 14:34:49.487771 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d\": container with ID starting with e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d not found: ID does not exist" containerID="e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.487828 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d"} err="failed to get container status \"e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d\": rpc error: code = NotFound desc = could not find container \"e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d\": container with ID starting with e7f8cfea375e54d9ccb0ed7b75946e194c05c1f1711d41ee5a445aefbc3a8e7d not found: ID does not exist" Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.514978 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d"] Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.520934 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c97956df5-qdh8d"] Feb 18 14:34:49 crc kubenswrapper[4957]: I0218 14:34:49.587340 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj"] Feb 18 14:34:49 crc kubenswrapper[4957]: W0218 14:34:49.593539 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac46aaf_f49f_47d7_a71e_14d94fa7d759.slice/crio-8fb071092dfb26d5dad23b90691905329e71fe9db00635c25705f21b25efc545 WatchSource:0}: Error finding container 8fb071092dfb26d5dad23b90691905329e71fe9db00635c25705f21b25efc545: Status 404 returned error can't find the container with id 8fb071092dfb26d5dad23b90691905329e71fe9db00635c25705f21b25efc545 Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.220252 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47079600-0758-4b59-a2ed-97cfe597a38f" path="/var/lib/kubelet/pods/47079600-0758-4b59-a2ed-97cfe597a38f/volumes" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.485305 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" event={"ID":"dac46aaf-f49f-47d7-a71e-14d94fa7d759","Type":"ContainerStarted","Data":"c47be2b1b9e70c3c6368f5b9cdacb985b0a27ab3d40a0649eb7d80e314de8cad"} Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.485374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" event={"ID":"dac46aaf-f49f-47d7-a71e-14d94fa7d759","Type":"ContainerStarted","Data":"8fb071092dfb26d5dad23b90691905329e71fe9db00635c25705f21b25efc545"} Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.486174 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.491316 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.538009 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" podStartSLOduration=9.537984304 podStartE2EDuration="9.537984304s" podCreationTimestamp="2026-02-18 14:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:50.51576983 +0000 UTC m=+197.036634584" watchObservedRunningTime="2026-02-18 14:34:50.537984304 +0000 UTC m=+197.058849048" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.774840 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.821471 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5749f52b-5200-4d2e-a997-9e25a1839903-kubelet-dir\") pod \"5749f52b-5200-4d2e-a997-9e25a1839903\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.821586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5749f52b-5200-4d2e-a997-9e25a1839903-kube-api-access\") pod \"5749f52b-5200-4d2e-a997-9e25a1839903\" (UID: \"5749f52b-5200-4d2e-a997-9e25a1839903\") " Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.821618 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5749f52b-5200-4d2e-a997-9e25a1839903-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5749f52b-5200-4d2e-a997-9e25a1839903" (UID: "5749f52b-5200-4d2e-a997-9e25a1839903"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.821875 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5749f52b-5200-4d2e-a997-9e25a1839903-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.827486 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5749f52b-5200-4d2e-a997-9e25a1839903-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5749f52b-5200-4d2e-a997-9e25a1839903" (UID: "5749f52b-5200-4d2e-a997-9e25a1839903"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:50 crc kubenswrapper[4957]: I0218 14:34:50.923013 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5749f52b-5200-4d2e-a997-9e25a1839903-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:51 crc kubenswrapper[4957]: I0218 14:34:51.494741 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5749f52b-5200-4d2e-a997-9e25a1839903","Type":"ContainerDied","Data":"07f2afcbab4f591a966f940d114621e9ffd34f6e3ffeef809a266b6854d21418"} Feb 18 14:34:51 crc kubenswrapper[4957]: I0218 14:34:51.495211 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07f2afcbab4f591a966f940d114621e9ffd34f6e3ffeef809a266b6854d21418" Feb 18 14:34:51 crc kubenswrapper[4957]: I0218 14:34:51.494908 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.929034 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 14:34:53 crc kubenswrapper[4957]: E0218 14:34:53.929830 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5749f52b-5200-4d2e-a997-9e25a1839903" containerName="pruner" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.929843 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5749f52b-5200-4d2e-a997-9e25a1839903" containerName="pruner" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.929958 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5749f52b-5200-4d2e-a997-9e25a1839903" containerName="pruner" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.930379 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.942108 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.943706 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:34:53 crc kubenswrapper[4957]: I0218 14:34:53.945245 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.071031 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.071372 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-var-lock\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.071497 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ea270e1-6732-4aed-8052-bc3a03f88791-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.141367 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-594sd"] Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.172641 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-var-lock\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.172686 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ea270e1-6732-4aed-8052-bc3a03f88791-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.172772 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.172830 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.172875 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-var-lock\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.205393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ea270e1-6732-4aed-8052-bc3a03f88791-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.261112 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:34:54 crc kubenswrapper[4957]: I0218 14:34:54.535763 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 14:34:54 crc kubenswrapper[4957]: W0218 14:34:54.543621 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2ea270e1_6732_4aed_8052_bc3a03f88791.slice/crio-44add0e762f098374f054996ab4cad264e290e0bfaa54acf37440e24ac49436a WatchSource:0}: Error finding container 44add0e762f098374f054996ab4cad264e290e0bfaa54acf37440e24ac49436a: Status 404 returned error can't find the container with id 44add0e762f098374f054996ab4cad264e290e0bfaa54acf37440e24ac49436a Feb 18 14:34:55 crc kubenswrapper[4957]: I0218 14:34:55.516702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ea270e1-6732-4aed-8052-bc3a03f88791","Type":"ContainerStarted","Data":"fda91dd5fe6369a9cc479dc125beef4b5aeda1e8faf7de11ec41220bff3c0efb"} Feb 18 14:34:55 crc kubenswrapper[4957]: I0218 14:34:55.517093 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ea270e1-6732-4aed-8052-bc3a03f88791","Type":"ContainerStarted","Data":"44add0e762f098374f054996ab4cad264e290e0bfaa54acf37440e24ac49436a"} Feb 18 14:34:55 crc kubenswrapper[4957]: I0218 14:34:55.537264 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.537242916 podStartE2EDuration="2.537242916s" podCreationTimestamp="2026-02-18 14:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:34:55.532260683 +0000 UTC m=+202.053125427" 
watchObservedRunningTime="2026-02-18 14:34:55.537242916 +0000 UTC m=+202.058107660" Feb 18 14:34:59 crc kubenswrapper[4957]: I0218 14:34:59.539074 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerStarted","Data":"ffb32e84e89226a933d6a9cc18617bb1459d3b66f1433f4fc7302aa52f5a599a"} Feb 18 14:35:00 crc kubenswrapper[4957]: I0218 14:35:00.547199 4957 generic.go:334] "Generic (PLEG): container finished" podID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerID="ffb32e84e89226a933d6a9cc18617bb1459d3b66f1433f4fc7302aa52f5a599a" exitCode=0 Feb 18 14:35:00 crc kubenswrapper[4957]: I0218 14:35:00.547306 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerDied","Data":"ffb32e84e89226a933d6a9cc18617bb1459d3b66f1433f4fc7302aa52f5a599a"} Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.079968 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-777c8b6456-wpp9b"] Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.080295 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" podUID="27455848-8e9a-475d-b5bc-7461384ba575" containerName="controller-manager" containerID="cri-o://7e7084fbda1c7228c1e42f7cf4b3376a5c90faaacb1043d891fa810d1cfeac4b" gracePeriod=30 Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.099767 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj"] Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.100070 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" podUID="dac46aaf-f49f-47d7-a71e-14d94fa7d759" containerName="route-controller-manager" containerID="cri-o://c47be2b1b9e70c3c6368f5b9cdacb985b0a27ab3d40a0649eb7d80e314de8cad" gracePeriod=30 Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.416266 4957 patch_prober.go:28] interesting pod/controller-manager-777c8b6456-wpp9b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.416999 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" podUID="27455848-8e9a-475d-b5bc-7461384ba575" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.574212 4957 generic.go:334] "Generic (PLEG): container finished" podID="dac46aaf-f49f-47d7-a71e-14d94fa7d759" containerID="c47be2b1b9e70c3c6368f5b9cdacb985b0a27ab3d40a0649eb7d80e314de8cad" exitCode=0 Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.574287 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" event={"ID":"dac46aaf-f49f-47d7-a71e-14d94fa7d759","Type":"ContainerDied","Data":"c47be2b1b9e70c3c6368f5b9cdacb985b0a27ab3d40a0649eb7d80e314de8cad"} Feb 18 
Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.576080 4957 generic.go:334] "Generic (PLEG): container finished" podID="27455848-8e9a-475d-b5bc-7461384ba575" containerID="7e7084fbda1c7228c1e42f7cf4b3376a5c90faaacb1043d891fa810d1cfeac4b" exitCode=0
Feb 18 14:35:01 crc kubenswrapper[4957]: I0218 14:35:01.576105 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" event={"ID":"27455848-8e9a-475d-b5bc-7461384ba575","Type":"ContainerDied","Data":"7e7084fbda1c7228c1e42f7cf4b3376a5c90faaacb1043d891fa810d1cfeac4b"}
Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.586685 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" event={"ID":"27455848-8e9a-475d-b5bc-7461384ba575","Type":"ContainerDied","Data":"cb7421ab31f81d67d1ef47e38aacc5b4e3e28b8d1ea701bf0728ac218a3a522f"}
Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.586998 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb7421ab31f81d67d1ef47e38aacc5b4e3e28b8d1ea701bf0728ac218a3a522f"
Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.587526 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b"
Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.618902 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-768657449-zd8xv"]
Feb 18 14:35:02 crc kubenswrapper[4957]: E0218 14:35:02.619178 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27455848-8e9a-475d-b5bc-7461384ba575" containerName="controller-manager"
Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.619212 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="27455848-8e9a-475d-b5bc-7461384ba575" containerName="controller-manager"
Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.619319 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="27455848-8e9a-475d-b5bc-7461384ba575" containerName="controller-manager"
Need to start a new one" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.640318 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-768657449-zd8xv"] Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.686058 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btxbz\" (UniqueName: \"kubernetes.io/projected/27455848-8e9a-475d-b5bc-7461384ba575-kube-api-access-btxbz\") pod \"27455848-8e9a-475d-b5bc-7461384ba575\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.686238 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-proxy-ca-bundles\") pod \"27455848-8e9a-475d-b5bc-7461384ba575\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.686306 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-config\") pod \"27455848-8e9a-475d-b5bc-7461384ba575\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.686380 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27455848-8e9a-475d-b5bc-7461384ba575-serving-cert\") pod \"27455848-8e9a-475d-b5bc-7461384ba575\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.686406 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-client-ca\") pod \"27455848-8e9a-475d-b5bc-7461384ba575\" (UID: \"27455848-8e9a-475d-b5bc-7461384ba575\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.686973 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-config\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687108 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-client-ca\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687137 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-proxy-ca-bundles\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687147 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-proxy-ca-bundles" 
(OuterVolumeSpecName: "proxy-ca-bundles") pod "27455848-8e9a-475d-b5bc-7461384ba575" (UID: "27455848-8e9a-475d-b5bc-7461384ba575"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687139 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-client-ca" (OuterVolumeSpecName: "client-ca") pod "27455848-8e9a-475d-b5bc-7461384ba575" (UID: "27455848-8e9a-475d-b5bc-7461384ba575"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687273 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64c6\" (UniqueName: \"kubernetes.io/projected/6af986d4-5f4a-407a-b9c8-815250d3a145-kube-api-access-l64c6\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687313 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6af986d4-5f4a-407a-b9c8-815250d3a145-serving-cert\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687637 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.687653 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.688225 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-config" (OuterVolumeSpecName: "config") pod "27455848-8e9a-475d-b5bc-7461384ba575" (UID: "27455848-8e9a-475d-b5bc-7461384ba575"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.693068 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27455848-8e9a-475d-b5bc-7461384ba575-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "27455848-8e9a-475d-b5bc-7461384ba575" (UID: "27455848-8e9a-475d-b5bc-7461384ba575"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.699041 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27455848-8e9a-475d-b5bc-7461384ba575-kube-api-access-btxbz" (OuterVolumeSpecName: "kube-api-access-btxbz") pod "27455848-8e9a-475d-b5bc-7461384ba575" (UID: "27455848-8e9a-475d-b5bc-7461384ba575"). InnerVolumeSpecName "kube-api-access-btxbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.788837 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-config\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.788881 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-client-ca\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.788904 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-proxy-ca-bundles\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.788931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l64c6\" (UniqueName: \"kubernetes.io/projected/6af986d4-5f4a-407a-b9c8-815250d3a145-kube-api-access-l64c6\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.788956 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6af986d4-5f4a-407a-b9c8-815250d3a145-serving-cert\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.788992 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27455848-8e9a-475d-b5bc-7461384ba575-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.789003 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27455848-8e9a-475d-b5bc-7461384ba575-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.789014 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btxbz\" (UniqueName: \"kubernetes.io/projected/27455848-8e9a-475d-b5bc-7461384ba575-kube-api-access-btxbz\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.790094 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-client-ca\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.790323 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-config\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.791296 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-proxy-ca-bundles\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.793224 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6af986d4-5f4a-407a-b9c8-815250d3a145-serving-cert\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.808581 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l64c6\" (UniqueName: \"kubernetes.io/projected/6af986d4-5f4a-407a-b9c8-815250d3a145-kube-api-access-l64c6\") pod \"controller-manager-768657449-zd8xv\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.851910 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.889903 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-config\") pod \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.889990 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqb8b\" (UniqueName: \"kubernetes.io/projected/dac46aaf-f49f-47d7-a71e-14d94fa7d759-kube-api-access-zqb8b\") pod \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.890039 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-client-ca\") pod \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.890076 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac46aaf-f49f-47d7-a71e-14d94fa7d759-serving-cert\") pod \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\" (UID: \"dac46aaf-f49f-47d7-a71e-14d94fa7d759\") " Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.891822 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-client-ca" (OuterVolumeSpecName: "client-ca") pod "dac46aaf-f49f-47d7-a71e-14d94fa7d759" (UID: "dac46aaf-f49f-47d7-a71e-14d94fa7d759"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.891932 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-config" (OuterVolumeSpecName: "config") pod "dac46aaf-f49f-47d7-a71e-14d94fa7d759" (UID: "dac46aaf-f49f-47d7-a71e-14d94fa7d759"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.894304 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac46aaf-f49f-47d7-a71e-14d94fa7d759-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dac46aaf-f49f-47d7-a71e-14d94fa7d759" (UID: "dac46aaf-f49f-47d7-a71e-14d94fa7d759"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.895634 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac46aaf-f49f-47d7-a71e-14d94fa7d759-kube-api-access-zqb8b" (OuterVolumeSpecName: "kube-api-access-zqb8b") pod "dac46aaf-f49f-47d7-a71e-14d94fa7d759" (UID: "dac46aaf-f49f-47d7-a71e-14d94fa7d759"). InnerVolumeSpecName "kube-api-access-zqb8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.979885 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.991762 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.991793 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac46aaf-f49f-47d7-a71e-14d94fa7d759-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.991802 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac46aaf-f49f-47d7-a71e-14d94fa7d759-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:02 crc kubenswrapper[4957]: I0218 14:35:02.991812 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqb8b\" (UniqueName: \"kubernetes.io/projected/dac46aaf-f49f-47d7-a71e-14d94fa7d759-kube-api-access-zqb8b\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.496224 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-768657449-zd8xv"] Feb 18 14:35:03 crc kubenswrapper[4957]: W0218 14:35:03.506020 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af986d4_5f4a_407a_b9c8_815250d3a145.slice/crio-7a86329d350e65950e28edb3a822c99d7516681739433509d360914b3cc67dc3 WatchSource:0}: Error finding container 7a86329d350e65950e28edb3a822c99d7516681739433509d360914b3cc67dc3: Status 404 returned error can't find the container with id 7a86329d350e65950e28edb3a822c99d7516681739433509d360914b3cc67dc3 Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.595558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" event={"ID":"dac46aaf-f49f-47d7-a71e-14d94fa7d759","Type":"ContainerDied","Data":"8fb071092dfb26d5dad23b90691905329e71fe9db00635c25705f21b25efc545"} Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.595604 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj" Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.595617 4957 scope.go:117] "RemoveContainer" containerID="c47be2b1b9e70c3c6368f5b9cdacb985b0a27ab3d40a0649eb7d80e314de8cad" Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.598792 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerStarted","Data":"e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50"} Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.604470 4957 generic.go:334] "Generic (PLEG): container finished" podID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerID="e9064e566324427830a05c44aa17e4ef4e1c72f08c21e5d2d9b555a588dc039b" exitCode=0 Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.604559 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnwwm" event={"ID":"0237301c-c7c4-490c-9be1-daaa5db30a10","Type":"ContainerDied","Data":"e9064e566324427830a05c44aa17e4ef4e1c72f08c21e5d2d9b555a588dc039b"} Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.607879 4957 generic.go:334] "Generic (PLEG): container finished" podID="f83e4add-33e3-4e43-af47-f9980471df63" containerID="302e99003891611fc6e14978182e8bffa1217e4f48712144aae55ea34361761a" exitCode=0 Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.607970 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4dn" event={"ID":"f83e4add-33e3-4e43-af47-f9980471df63","Type":"ContainerDied","Data":"302e99003891611fc6e14978182e8bffa1217e4f48712144aae55ea34361761a"} Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.610246 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" event={"ID":"6af986d4-5f4a-407a-b9c8-815250d3a145","Type":"ContainerStarted","Data":"7a86329d350e65950e28edb3a822c99d7516681739433509d360914b3cc67dc3"} Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.617221 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-777c8b6456-wpp9b" Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.620469 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerStarted","Data":"2c34676225b454785f4d326bc3bf8f563a626f8d4f5706567491f71d3539744b"} Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.704826 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wg8d9" podStartSLOduration=2.674293557 podStartE2EDuration="59.704802261s" podCreationTimestamp="2026-02-18 14:34:04 +0000 UTC" firstStartedPulling="2026-02-18 14:34:05.897734358 +0000 UTC m=+152.418599102" lastFinishedPulling="2026-02-18 14:35:02.928243042 +0000 UTC m=+209.449107806" observedRunningTime="2026-02-18 14:35:03.701084824 +0000 UTC m=+210.221949578" watchObservedRunningTime="2026-02-18 14:35:03.704802261 +0000 UTC m=+210.225667035" Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.717123 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj"] Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.727006 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74fbcdbf75-m7xnj"] Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.731226 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-777c8b6456-wpp9b"] Feb 18 14:35:03 crc kubenswrapper[4957]: I0218 14:35:03.735049 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-777c8b6456-wpp9b"] Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.220819 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27455848-8e9a-475d-b5bc-7461384ba575" path="/var/lib/kubelet/pods/27455848-8e9a-475d-b5bc-7461384ba575/volumes" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.221831 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac46aaf-f49f-47d7-a71e-14d94fa7d759" path="/var/lib/kubelet/pods/dac46aaf-f49f-47d7-a71e-14d94fa7d759/volumes" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.623744 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4dn" event={"ID":"f83e4add-33e3-4e43-af47-f9980471df63","Type":"ContainerStarted","Data":"72add7e3816ed4e76a05547e4025bfbae55d05b9fda925d6e5170dc6791b2bb0"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.625960 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" event={"ID":"6af986d4-5f4a-407a-b9c8-815250d3a145","Type":"ContainerStarted","Data":"91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.626597 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.627952 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wg8d9" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.628058 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-wg8d9" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.630448 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerStarted","Data":"420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.632504 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.633934 4957 generic.go:334] "Generic (PLEG): container finished" podID="597544d6-7743-45b7-91d3-54e797b3e342" containerID="e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50" exitCode=0 Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.634007 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerDied","Data":"e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.637096 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnwwm" event={"ID":"0237301c-c7c4-490c-9be1-daaa5db30a10","Type":"ContainerStarted","Data":"5bd52f2bcc255fc3b41355e7e240114ceb74bdb8c1686360a044eeffd9dca01b"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.641359 4957 generic.go:334] "Generic (PLEG): container finished" podID="74312833-84d3-4221-a8f7-07c892db5165" containerID="34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1" exitCode=0 Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.641466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh6pb" event={"ID":"74312833-84d3-4221-a8f7-07c892db5165","Type":"ContainerDied","Data":"34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.655119 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerID="242405b454f851299caf8bc87a30864e03ccb12c9699ed4c01538ce0b122e006" exitCode=0 Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.655924 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpb4w" event={"ID":"f4d1018f-2e03-4372-98e6-cbba16adff43","Type":"ContainerDied","Data":"242405b454f851299caf8bc87a30864e03ccb12c9699ed4c01538ce0b122e006"} Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.676309 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lj4dn" podStartSLOduration=3.254154841 podStartE2EDuration="1m2.676287873s" podCreationTimestamp="2026-02-18 14:34:02 +0000 UTC" firstStartedPulling="2026-02-18 14:34:04.801870668 +0000 UTC m=+151.322735412" lastFinishedPulling="2026-02-18 14:35:04.2240037 +0000 UTC m=+210.744868444" observedRunningTime="2026-02-18 14:35:04.648124752 +0000 UTC m=+211.168989496" watchObservedRunningTime="2026-02-18 14:35:04.676287873 +0000 UTC m=+211.197152617" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.699410 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gnwwm" podStartSLOduration=3.33371968 podStartE2EDuration="1m2.699394908s" podCreationTimestamp="2026-02-18 
14:34:02 +0000 UTC" firstStartedPulling="2026-02-18 14:34:04.785207567 +0000 UTC m=+151.306072311" lastFinishedPulling="2026-02-18 14:35:04.150882795 +0000 UTC m=+210.671747539" observedRunningTime="2026-02-18 14:35:04.696853815 +0000 UTC m=+211.217718569" watchObservedRunningTime="2026-02-18 14:35:04.699394908 +0000 UTC m=+211.220259642" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.710086 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz"] Feb 18 14:35:04 crc kubenswrapper[4957]: E0218 14:35:04.710314 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac46aaf-f49f-47d7-a71e-14d94fa7d759" containerName="route-controller-manager" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.710326 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac46aaf-f49f-47d7-a71e-14d94fa7d759" containerName="route-controller-manager" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.710446 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac46aaf-f49f-47d7-a71e-14d94fa7d759" containerName="route-controller-manager" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.710824 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.712798 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.712932 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.713286 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.713803 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.713963 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.714165 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.734669 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz"] Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.796104 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" podStartSLOduration=3.796080362 podStartE2EDuration="3.796080362s" podCreationTimestamp="2026-02-18 14:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:35:04.774887582 +0000 UTC m=+211.295752326" watchObservedRunningTime="2026-02-18 14:35:04.796080362 +0000 UTC m=+211.316945106" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.814364 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-config\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.814471 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdpb\" (UniqueName: \"kubernetes.io/projected/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-kube-api-access-2bdpb\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.814565 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-serving-cert\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.814602 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-client-ca\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.916135 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-serving-cert\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.916210 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-client-ca\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.916252 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-config\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.916278 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdpb\" (UniqueName: \"kubernetes.io/projected/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-kube-api-access-2bdpb\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.918283 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-client-ca\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.918435 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-config\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.937814 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-serving-cert\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:04 crc kubenswrapper[4957]: I0218 14:35:04.942787 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdpb\" (UniqueName: \"kubernetes.io/projected/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-kube-api-access-2bdpb\") pod \"route-controller-manager-7fb88d4f5-hmhbz\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.032447 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.516358 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz"] Feb 18 14:35:05 crc kubenswrapper[4957]: W0218 14:35:05.525590 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8095b960_a00c_4939_aa15_5bd9e6e9e6e3.slice/crio-5c8544dd3bfa73a22430efcddfa34b954cdc4d4abdb7c174e3d9ecfd15bef8f4 WatchSource:0}: Error finding container 5c8544dd3bfa73a22430efcddfa34b954cdc4d4abdb7c174e3d9ecfd15bef8f4: Status 404 returned error can't find the container with id 5c8544dd3bfa73a22430efcddfa34b954cdc4d4abdb7c174e3d9ecfd15bef8f4 Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.672577 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerStarted","Data":"ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4"} Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.677357 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh6pb" event={"ID":"74312833-84d3-4221-a8f7-07c892db5165","Type":"ContainerStarted","Data":"dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5"} Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.682447 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpb4w" event={"ID":"f4d1018f-2e03-4372-98e6-cbba16adff43","Type":"ContainerStarted","Data":"584b5b4a77173200d863df2124a9f387ddb936c8f02e90a33157512daac54a56"} Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.684329 4957 generic.go:334] 
"Generic (PLEG): container finished" podID="af490140-34e3-4689-b13a-112b97f5cd9e" containerID="855e6077a7264473e15c1864077ce238d3f521ccfa71b8ceee6a5af2408ab52f" exitCode=0 Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.684430 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p65zp" event={"ID":"af490140-34e3-4689-b13a-112b97f5cd9e","Type":"ContainerDied","Data":"855e6077a7264473e15c1864077ce238d3f521ccfa71b8ceee6a5af2408ab52f"} Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.691009 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" event={"ID":"8095b960-a00c-4939-aa15-5bd9e6e9e6e3","Type":"ContainerStarted","Data":"5c8544dd3bfa73a22430efcddfa34b954cdc4d4abdb7c174e3d9ecfd15bef8f4"} Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.693020 4957 generic.go:334] "Generic (PLEG): container finished" podID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerID="420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae" exitCode=0 Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.693788 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerDied","Data":"420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae"} Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.701608 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tbbh" podStartSLOduration=2.483755098 podStartE2EDuration="1m0.701587153s" podCreationTimestamp="2026-02-18 14:34:05 +0000 UTC" firstStartedPulling="2026-02-18 14:34:06.925806233 +0000 UTC m=+153.446670977" lastFinishedPulling="2026-02-18 14:35:05.143638288 +0000 UTC m=+211.664503032" observedRunningTime="2026-02-18 14:35:05.697954808 +0000 UTC m=+212.218819552" watchObservedRunningTime="2026-02-18 14:35:05.701587153 +0000 UTC m=+212.222451907" Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.731146 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gpb4w" podStartSLOduration=2.5311187950000003 podStartE2EDuration="1m0.731128684s" podCreationTimestamp="2026-02-18 14:34:05 +0000 UTC" firstStartedPulling="2026-02-18 14:34:06.887490843 +0000 UTC m=+153.408355607" lastFinishedPulling="2026-02-18 14:35:05.087500752 +0000 UTC m=+211.608365496" observedRunningTime="2026-02-18 14:35:05.727326514 +0000 UTC m=+212.248191258" watchObservedRunningTime="2026-02-18 14:35:05.731128684 +0000 UTC m=+212.251993428" Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.747700 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wh6pb" podStartSLOduration=3.336523117 podStartE2EDuration="1m3.74768362s" podCreationTimestamp="2026-02-18 14:34:02 +0000 UTC" firstStartedPulling="2026-02-18 14:34:04.735634986 +0000 UTC m=+151.256499740" lastFinishedPulling="2026-02-18 14:35:05.146795499 +0000 UTC m=+211.667660243" observedRunningTime="2026-02-18 14:35:05.746177707 +0000 UTC m=+212.267042461" watchObservedRunningTime="2026-02-18 14:35:05.74768362 +0000 UTC m=+212.268548364" Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.948031 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wg8d9" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" 
containerName="registry-server" probeResult="failure" output=< Feb 18 14:35:05 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:35:05 crc kubenswrapper[4957]: > Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.959859 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:35:05 crc kubenswrapper[4957]: I0218 14:35:05.959906 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.714112 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p65zp" event={"ID":"af490140-34e3-4689-b13a-112b97f5cd9e","Type":"ContainerStarted","Data":"d97aba5261cebb04f6d8a81508e35c5e7d36e8d6907b02a16c62bed1d52ab7aa"} Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.728533 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" event={"ID":"8095b960-a00c-4939-aa15-5bd9e6e9e6e3","Type":"ContainerStarted","Data":"6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e"} Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.728863 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.740203 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p65zp" podStartSLOduration=4.35814444 podStartE2EDuration="1m5.740181797s" podCreationTimestamp="2026-02-18 14:34:01 +0000 UTC" firstStartedPulling="2026-02-18 14:34:04.823876997 +0000 UTC m=+151.344741731" lastFinishedPulling="2026-02-18 14:35:06.205914344 +0000 UTC m=+212.726779088" observedRunningTime="2026-02-18 14:35:06.738270082 +0000 UTC m=+213.259134826" watchObservedRunningTime="2026-02-18 14:35:06.740181797 +0000 UTC m=+213.261046541" Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.741369 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.741449 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerStarted","Data":"c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c"} Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.768832 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" podStartSLOduration=5.768797081 podStartE2EDuration="5.768797081s" podCreationTimestamp="2026-02-18 14:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:35:06.763750706 +0000 UTC m=+213.284615460" watchObservedRunningTime="2026-02-18 14:35:06.768797081 +0000 UTC m=+213.289661835" Feb 18 14:35:06 crc kubenswrapper[4957]: I0218 14:35:06.783584 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jpjb" podStartSLOduration=2.495425303 podStartE2EDuration="1m2.783560046s" podCreationTimestamp="2026-02-18 14:34:04 +0000 UTC" 
firstStartedPulling="2026-02-18 14:34:05.839308572 +0000 UTC m=+152.360173316" lastFinishedPulling="2026-02-18 14:35:06.127443315 +0000 UTC m=+212.648308059" observedRunningTime="2026-02-18 14:35:06.78230054 +0000 UTC m=+213.303165284" watchObservedRunningTime="2026-02-18 14:35:06.783560046 +0000 UTC m=+213.304424790" Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.011601 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5tbbh" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="registry-server" probeResult="failure" output=< Feb 18 14:35:07 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:35:07 crc kubenswrapper[4957]: > Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.279076 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.279145 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.279199 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.279764 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.279814 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e" gracePeriod=600 Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.758034 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e" exitCode=0 Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.758870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e"} Feb 18 14:35:07 crc kubenswrapper[4957]: I0218 14:35:07.758896 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"039cd260c9485199567f1825b67f6c50651e5a330eb2444ec1819bfdfe45f6a1"} Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.371525 4957 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.372350 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.442786 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.473728 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.473793 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.518843 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.720619 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.720678 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.757570 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.826609 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.831730 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wh6pb" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.833461 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p65zp" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.922804 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.923296 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:35:12 crc kubenswrapper[4957]: I0218 14:35:12.972198 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:35:13 crc kubenswrapper[4957]: I0218 14:35:13.829139 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:35:14 crc kubenswrapper[4957]: I0218 14:35:14.686008 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wg8d9" Feb 18 14:35:14 crc kubenswrapper[4957]: I0218 14:35:14.734374 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wg8d9" Feb 18 14:35:14 crc kubenswrapper[4957]: I0218 14:35:14.844508 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jpjb" Feb 18 14:35:14 crc kubenswrapper[4957]: I0218 
14:35:14.844579 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jpjb" Feb 18 14:35:14 crc kubenswrapper[4957]: I0218 14:35:14.886041 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9jpjb" Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.046606 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj4dn"] Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.046907 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lj4dn" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="registry-server" containerID="cri-o://72add7e3816ed4e76a05547e4025bfbae55d05b9fda925d6e5170dc6791b2bb0" gracePeriod=2 Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.512137 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gpb4w" Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.512201 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gpb4w" Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.557175 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gpb4w" Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.803490 4957 generic.go:334] "Generic (PLEG): container finished" podID="f83e4add-33e3-4e43-af47-f9980471df63" containerID="72add7e3816ed4e76a05547e4025bfbae55d05b9fda925d6e5170dc6791b2bb0" exitCode=0 Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.803563 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4dn" event={"ID":"f83e4add-33e3-4e43-af47-f9980471df63","Type":"ContainerDied","Data":"72add7e3816ed4e76a05547e4025bfbae55d05b9fda925d6e5170dc6791b2bb0"} Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.847514 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jpjb" Feb 18 14:35:15 crc kubenswrapper[4957]: I0218 14:35:15.847913 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gpb4w" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.015759 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.076615 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gnwwm"] Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.076856 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gnwwm" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="registry-server" containerID="cri-o://5bd52f2bcc255fc3b41355e7e240114ceb74bdb8c1686360a044eeffd9dca01b" gracePeriod=2 Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.093737 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.305332 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.377264 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlkgb\" (UniqueName: \"kubernetes.io/projected/f83e4add-33e3-4e43-af47-f9980471df63-kube-api-access-vlkgb\") pod \"f83e4add-33e3-4e43-af47-f9980471df63\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.377328 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-utilities\") pod \"f83e4add-33e3-4e43-af47-f9980471df63\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.377470 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-catalog-content\") pod \"f83e4add-33e3-4e43-af47-f9980471df63\" (UID: \"f83e4add-33e3-4e43-af47-f9980471df63\") " Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.378552 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-utilities" (OuterVolumeSpecName: "utilities") pod "f83e4add-33e3-4e43-af47-f9980471df63" (UID: "f83e4add-33e3-4e43-af47-f9980471df63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.402630 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f83e4add-33e3-4e43-af47-f9980471df63-kube-api-access-vlkgb" (OuterVolumeSpecName: "kube-api-access-vlkgb") pod "f83e4add-33e3-4e43-af47-f9980471df63" (UID: "f83e4add-33e3-4e43-af47-f9980471df63"). InnerVolumeSpecName "kube-api-access-vlkgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.466974 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f83e4add-33e3-4e43-af47-f9980471df63" (UID: "f83e4add-33e3-4e43-af47-f9980471df63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.478824 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlkgb\" (UniqueName: \"kubernetes.io/projected/f83e4add-33e3-4e43-af47-f9980471df63-kube-api-access-vlkgb\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.478895 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.478910 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f83e4add-33e3-4e43-af47-f9980471df63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.824266 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4dn" event={"ID":"f83e4add-33e3-4e43-af47-f9980471df63","Type":"ContainerDied","Data":"12e0ce09ce8f568e47cdc0f6c056e82eab414886c5f646e4512507f275f960e0"} Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.824353 4957 scope.go:117] "RemoveContainer" containerID="72add7e3816ed4e76a05547e4025bfbae55d05b9fda925d6e5170dc6791b2bb0" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.824858 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4dn" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.841928 4957 scope.go:117] "RemoveContainer" containerID="302e99003891611fc6e14978182e8bffa1217e4f48712144aae55ea34361761a" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.863655 4957 scope.go:117] "RemoveContainer" containerID="c0aea824c0178765a471c2297627caad783fae42feb994db5d1aab99099b268a" Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.868938 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj4dn"] Feb 18 14:35:16 crc kubenswrapper[4957]: I0218 14:35:16.877872 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lj4dn"] Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.447079 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpjb"] Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.831737 4957 generic.go:334] "Generic (PLEG): container finished" podID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerID="5bd52f2bcc255fc3b41355e7e240114ceb74bdb8c1686360a044eeffd9dca01b" exitCode=0 Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.831802 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnwwm" event={"ID":"0237301c-c7c4-490c-9be1-daaa5db30a10","Type":"ContainerDied","Data":"5bd52f2bcc255fc3b41355e7e240114ceb74bdb8c1686360a044eeffd9dca01b"} Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.834319 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jpjb" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="registry-server" containerID="cri-o://c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c" gracePeriod=2 Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.964452 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.994322 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-utilities\") pod \"0237301c-c7c4-490c-9be1-daaa5db30a10\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.994463 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdh9w\" (UniqueName: \"kubernetes.io/projected/0237301c-c7c4-490c-9be1-daaa5db30a10-kube-api-access-vdh9w\") pod \"0237301c-c7c4-490c-9be1-daaa5db30a10\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.994494 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-catalog-content\") pod \"0237301c-c7c4-490c-9be1-daaa5db30a10\" (UID: \"0237301c-c7c4-490c-9be1-daaa5db30a10\") " Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.995493 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-utilities" (OuterVolumeSpecName: "utilities") pod "0237301c-c7c4-490c-9be1-daaa5db30a10" (UID: "0237301c-c7c4-490c-9be1-daaa5db30a10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:17 crc kubenswrapper[4957]: I0218 14:35:17.999016 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0237301c-c7c4-490c-9be1-daaa5db30a10-kube-api-access-vdh9w" (OuterVolumeSpecName: "kube-api-access-vdh9w") pod "0237301c-c7c4-490c-9be1-daaa5db30a10" (UID: "0237301c-c7c4-490c-9be1-daaa5db30a10"). InnerVolumeSpecName "kube-api-access-vdh9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.058862 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0237301c-c7c4-490c-9be1-daaa5db30a10" (UID: "0237301c-c7c4-490c-9be1-daaa5db30a10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.096156 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.096184 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdh9w\" (UniqueName: \"kubernetes.io/projected/0237301c-c7c4-490c-9be1-daaa5db30a10-kube-api-access-vdh9w\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.096198 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0237301c-c7c4-490c-9be1-daaa5db30a10-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.219178 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f83e4add-33e3-4e43-af47-f9980471df63" path="/var/lib/kubelet/pods/f83e4add-33e3-4e43-af47-f9980471df63/volumes" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.285928 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpjb" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.399368 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-catalog-content\") pod \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.399451 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtrn5\" (UniqueName: \"kubernetes.io/projected/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-kube-api-access-gtrn5\") pod \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.399493 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-utilities\") pod \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\" (UID: \"fbe80d3e-51d2-41fa-aa8b-5607e7e69745\") " Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.400515 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-utilities" (OuterVolumeSpecName: "utilities") pod "fbe80d3e-51d2-41fa-aa8b-5607e7e69745" (UID: "fbe80d3e-51d2-41fa-aa8b-5607e7e69745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.400579 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.411845 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-kube-api-access-gtrn5" (OuterVolumeSpecName: "kube-api-access-gtrn5") pod "fbe80d3e-51d2-41fa-aa8b-5607e7e69745" (UID: "fbe80d3e-51d2-41fa-aa8b-5607e7e69745"). InnerVolumeSpecName "kube-api-access-gtrn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.425347 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbe80d3e-51d2-41fa-aa8b-5607e7e69745" (UID: "fbe80d3e-51d2-41fa-aa8b-5607e7e69745"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.502310 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.502346 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtrn5\" (UniqueName: \"kubernetes.io/projected/fbe80d3e-51d2-41fa-aa8b-5607e7e69745-kube-api-access-gtrn5\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.840871 4957 generic.go:334] "Generic (PLEG): container finished" podID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerID="c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c" exitCode=0 Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.840950 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpjb" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.840956 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerDied","Data":"c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c"} Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.841082 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpjb" event={"ID":"fbe80d3e-51d2-41fa-aa8b-5607e7e69745","Type":"ContainerDied","Data":"190158fbf3221724df9d5bf7f29d47005dde94e402292b60b6664156459b2063"} Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.841104 4957 scope.go:117] "RemoveContainer" containerID="c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.844199 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnwwm" event={"ID":"0237301c-c7c4-490c-9be1-daaa5db30a10","Type":"ContainerDied","Data":"542b05c769c4073407e0bfad9ed2b4b5a7e1cced52b347b5c3b20e4eef21a112"} Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.844272 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gnwwm" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.861036 4957 scope.go:117] "RemoveContainer" containerID="420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.869125 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gnwwm"] Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.874459 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gnwwm"] Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.884897 4957 scope.go:117] "RemoveContainer" containerID="5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.890213 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpjb"] Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.894210 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpjb"] Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.899713 4957 scope.go:117] "RemoveContainer" containerID="c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c" Feb 18 14:35:18 crc kubenswrapper[4957]: E0218 14:35:18.900585 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c\": container with ID starting with c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c not found: ID does not exist" containerID="c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.900685 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c"} err="failed to get container status \"c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c\": rpc error: code = NotFound desc = could not find container \"c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c\": container with ID starting with c3745d8f0874c512eb0ac1ff40fc218f33462a293807585fff760d986efd5f6c not found: ID does not exist" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.900752 4957 scope.go:117] "RemoveContainer" containerID="420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae" Feb 18 14:35:18 crc kubenswrapper[4957]: E0218 14:35:18.901230 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae\": container with ID starting with 420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae not found: ID does not exist" containerID="420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.901270 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae"} err="failed to get container status \"420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae\": rpc error: code = NotFound desc = could not find container \"420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae\": container with ID starting with 
420c24a0e4e2caba7d093f683859e7c32662193a3f926d17e713a54fa4a6b5ae not found: ID does not exist" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.901302 4957 scope.go:117] "RemoveContainer" containerID="5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9" Feb 18 14:35:18 crc kubenswrapper[4957]: E0218 14:35:18.901685 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9\": container with ID starting with 5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9 not found: ID does not exist" containerID="5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.901744 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9"} err="failed to get container status \"5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9\": rpc error: code = NotFound desc = could not find container \"5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9\": container with ID starting with 5df7239993bf74a801bd908e7d535230116dab2862ea1b4185d6c1dc8d3a08d9 not found: ID does not exist" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.901784 4957 scope.go:117] "RemoveContainer" containerID="5bd52f2bcc255fc3b41355e7e240114ceb74bdb8c1686360a044eeffd9dca01b" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.915382 4957 scope.go:117] "RemoveContainer" containerID="e9064e566324427830a05c44aa17e4ef4e1c72f08c21e5d2d9b555a588dc039b" Feb 18 14:35:18 crc kubenswrapper[4957]: I0218 14:35:18.929820 4957 scope.go:117] "RemoveContainer" containerID="58d022c89bfe4386971455e907858bb48b6bfe6f7fe43a6ac02e0013ee815c93" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.172633 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" podUID="b8d0adaf-a3c6-4121-970c-1f6205db177e" containerName="oauth-openshift" containerID="cri-o://c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017" gracePeriod=15 Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.638873 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.716889 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-service-ca\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.716942 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-session\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.716972 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-login\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.716988 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-cliconfig\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717006 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-idp-0-file-data\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717033 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx625\" (UniqueName: \"kubernetes.io/projected/b8d0adaf-a3c6-4121-970c-1f6205db177e-kube-api-access-mx625\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717051 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-trusted-ca-bundle\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717076 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-serving-cert\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717092 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-policies\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 
14:35:19.717133 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-router-certs\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717170 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-error\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717221 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-ocp-branding-template\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717251 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-provider-selection\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.717394 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-dir\") pod \"b8d0adaf-a3c6-4121-970c-1f6205db177e\" (UID: \"b8d0adaf-a3c6-4121-970c-1f6205db177e\") " Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.718167 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.718180 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.718210 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.718194 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.718641 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.723143 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.723178 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d0adaf-a3c6-4121-970c-1f6205db177e-kube-api-access-mx625" (OuterVolumeSpecName: "kube-api-access-mx625") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "kube-api-access-mx625". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.723193 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.723408 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.723760 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.723968 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.724215 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.727478 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.731014 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b8d0adaf-a3c6-4121-970c-1f6205db177e" (UID: "b8d0adaf-a3c6-4121-970c-1f6205db177e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819275 4957 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819331 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819347 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819362 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819378 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819390 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819402 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx625\" (UniqueName: \"kubernetes.io/projected/b8d0adaf-a3c6-4121-970c-1f6205db177e-kube-api-access-mx625\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819418 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819448 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819459 4957 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8d0adaf-a3c6-4121-970c-1f6205db177e-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819474 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819487 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" 
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819499 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.819512 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8d0adaf-a3c6-4121-970c-1f6205db177e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.855662 4957 generic.go:334] "Generic (PLEG): container finished" podID="b8d0adaf-a3c6-4121-970c-1f6205db177e" containerID="c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017" exitCode=0
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.855720 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" event={"ID":"b8d0adaf-a3c6-4121-970c-1f6205db177e","Type":"ContainerDied","Data":"c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017"}
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.855738 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-594sd"
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.855767 4957 scope.go:117] "RemoveContainer" containerID="c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017"
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.855753 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-594sd" event={"ID":"b8d0adaf-a3c6-4121-970c-1f6205db177e","Type":"ContainerDied","Data":"d1592f6d814192d72d91450a827e64e20af5adf513187b9ecdfef1d572ac3c49"}
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.856008 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tbbh"]
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.857753 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tbbh" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="registry-server" containerID="cri-o://ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4" gracePeriod=2
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.881964 4957 scope.go:117] "RemoveContainer" containerID="c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017"
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.886169 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-594sd"]
Feb 18 14:35:19 crc kubenswrapper[4957]: E0218 14:35:19.886717 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017\": container with ID starting with c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017 not found: ID does not exist" containerID="c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017"
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.886780 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017"} err="failed to get container status \"c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017\": rpc error: code = NotFound desc = could not find container \"c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017\": container with ID starting with c458271912e707e3caa43b7614f2c63396fed6ee120fc00c9c1919b2ef86b017 not found: ID does not exist"
Feb 18 14:35:19 crc kubenswrapper[4957]: I0218 14:35:19.891085 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-594sd"]
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.222856 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" path="/var/lib/kubelet/pods/0237301c-c7c4-490c-9be1-daaa5db30a10/volumes"
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.223678 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d0adaf-a3c6-4121-970c-1f6205db177e" path="/var/lib/kubelet/pods/b8d0adaf-a3c6-4121-970c-1f6205db177e/volumes"
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.224268 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" path="/var/lib/kubelet/pods/fbe80d3e-51d2-41fa-aa8b-5607e7e69745/volumes"
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.334330 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tbbh"
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.428828 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwkwt\" (UniqueName: \"kubernetes.io/projected/597544d6-7743-45b7-91d3-54e797b3e342-kube-api-access-vwkwt\") pod \"597544d6-7743-45b7-91d3-54e797b3e342\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") "
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.428968 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-utilities\") pod \"597544d6-7743-45b7-91d3-54e797b3e342\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") "
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.429017 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-catalog-content\") pod \"597544d6-7743-45b7-91d3-54e797b3e342\" (UID: \"597544d6-7743-45b7-91d3-54e797b3e342\") "
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.430107 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-utilities" (OuterVolumeSpecName: "utilities") pod "597544d6-7743-45b7-91d3-54e797b3e342" (UID: "597544d6-7743-45b7-91d3-54e797b3e342"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.433660 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597544d6-7743-45b7-91d3-54e797b3e342-kube-api-access-vwkwt" (OuterVolumeSpecName: "kube-api-access-vwkwt") pod "597544d6-7743-45b7-91d3-54e797b3e342" (UID: "597544d6-7743-45b7-91d3-54e797b3e342"). InnerVolumeSpecName "kube-api-access-vwkwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.530611 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.530671 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwkwt\" (UniqueName: \"kubernetes.io/projected/597544d6-7743-45b7-91d3-54e797b3e342-kube-api-access-vwkwt\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.580010 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "597544d6-7743-45b7-91d3-54e797b3e342" (UID: "597544d6-7743-45b7-91d3-54e797b3e342"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.632117 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597544d6-7743-45b7-91d3-54e797b3e342-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.865061 4957 generic.go:334] "Generic (PLEG): container finished" podID="597544d6-7743-45b7-91d3-54e797b3e342" containerID="ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4" exitCode=0 Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.865167 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tbbh" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.865204 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerDied","Data":"ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4"} Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.865352 4957 scope.go:117] "RemoveContainer" containerID="ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.865746 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tbbh" event={"ID":"597544d6-7743-45b7-91d3-54e797b3e342","Type":"ContainerDied","Data":"edb1992b016c8f0e686b348126832778959b400038ad07c15e2e8da9cabd3e3d"} Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.883304 4957 scope.go:117] "RemoveContainer" containerID="e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.895259 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tbbh"] Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.900066 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tbbh"] Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.914507 4957 scope.go:117] "RemoveContainer" containerID="dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.931297 4957 scope.go:117] "RemoveContainer" containerID="ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4" Feb 18 14:35:20 crc kubenswrapper[4957]: E0218 14:35:20.932007 4957 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4\": container with ID starting with ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4 not found: ID does not exist" containerID="ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.932048 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4"} err="failed to get container status \"ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4\": rpc error: code = NotFound desc = could not find container \"ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4\": container with ID starting with ef5dc7ca92d2eca00c471307d40c39405c081ffb5cd26d99231e5a6ae101c4a4 not found: ID does not exist" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.932079 4957 scope.go:117] "RemoveContainer" containerID="e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50" Feb 18 14:35:20 crc kubenswrapper[4957]: E0218 14:35:20.932536 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50\": container with ID starting with e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50 not found: ID does not exist" containerID="e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.932560 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50"} err="failed to get container status \"e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50\": rpc error: code = NotFound desc = could not find container \"e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50\": container with ID starting with e8fa035f729ddfac56971bbb2c8fcaf987f976f07664e2c77c16a2a56b2f4d50 not found: ID does not exist" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.932574 4957 scope.go:117] "RemoveContainer" containerID="dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3" Feb 18 14:35:20 crc kubenswrapper[4957]: E0218 14:35:20.932777 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3\": container with ID starting with dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3 not found: ID does not exist" containerID="dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3" Feb 18 14:35:20 crc kubenswrapper[4957]: I0218 14:35:20.932799 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3"} err="failed to get container status \"dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3\": rpc error: code = NotFound desc = could not find container \"dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3\": container with ID starting with dd90bc58813ef65e3c9361da444ac9058e6b3d085b09470ee0a3194fc95b8bd3 not found: ID does not exist" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.063830 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-768657449-zd8xv"] Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.064146 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" podUID="6af986d4-5f4a-407a-b9c8-815250d3a145" containerName="controller-manager" containerID="cri-o://91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f" gracePeriod=30 Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.149776 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz"] Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.150003 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" podUID="8095b960-a00c-4939-aa15-5bd9e6e9e6e3" containerName="route-controller-manager" containerID="cri-o://6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e" gracePeriod=30 Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.636503 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.689638 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.744744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l64c6\" (UniqueName: \"kubernetes.io/projected/6af986d4-5f4a-407a-b9c8-815250d3a145-kube-api-access-l64c6\") pod \"6af986d4-5f4a-407a-b9c8-815250d3a145\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.744849 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-client-ca\") pod \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.744916 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-serving-cert\") pod \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.744954 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6af986d4-5f4a-407a-b9c8-815250d3a145-serving-cert\") pod \"6af986d4-5f4a-407a-b9c8-815250d3a145\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.744981 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdpb\" (UniqueName: \"kubernetes.io/projected/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-kube-api-access-2bdpb\") pod \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.745017 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-config\") pod 
\"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\" (UID: \"8095b960-a00c-4939-aa15-5bd9e6e9e6e3\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.745037 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-client-ca\") pod \"6af986d4-5f4a-407a-b9c8-815250d3a145\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.745065 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-proxy-ca-bundles\") pod \"6af986d4-5f4a-407a-b9c8-815250d3a145\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.745483 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-config\") pod \"6af986d4-5f4a-407a-b9c8-815250d3a145\" (UID: \"6af986d4-5f4a-407a-b9c8-815250d3a145\") " Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.745824 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "8095b960-a00c-4939-aa15-5bd9e6e9e6e3" (UID: "8095b960-a00c-4939-aa15-5bd9e6e9e6e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.746290 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6af986d4-5f4a-407a-b9c8-815250d3a145" (UID: "6af986d4-5f4a-407a-b9c8-815250d3a145"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.746311 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-client-ca" (OuterVolumeSpecName: "client-ca") pod "6af986d4-5f4a-407a-b9c8-815250d3a145" (UID: "6af986d4-5f4a-407a-b9c8-815250d3a145"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.746487 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-config" (OuterVolumeSpecName: "config") pod "6af986d4-5f4a-407a-b9c8-815250d3a145" (UID: "6af986d4-5f4a-407a-b9c8-815250d3a145"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.746670 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-config" (OuterVolumeSpecName: "config") pod "8095b960-a00c-4939-aa15-5bd9e6e9e6e3" (UID: "8095b960-a00c-4939-aa15-5bd9e6e9e6e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.750061 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-kube-api-access-2bdpb" (OuterVolumeSpecName: "kube-api-access-2bdpb") pod "8095b960-a00c-4939-aa15-5bd9e6e9e6e3" (UID: "8095b960-a00c-4939-aa15-5bd9e6e9e6e3"). InnerVolumeSpecName "kube-api-access-2bdpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.750138 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8095b960-a00c-4939-aa15-5bd9e6e9e6e3" (UID: "8095b960-a00c-4939-aa15-5bd9e6e9e6e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.750372 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af986d4-5f4a-407a-b9c8-815250d3a145-kube-api-access-l64c6" (OuterVolumeSpecName: "kube-api-access-l64c6") pod "6af986d4-5f4a-407a-b9c8-815250d3a145" (UID: "6af986d4-5f4a-407a-b9c8-815250d3a145"). InnerVolumeSpecName "kube-api-access-l64c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.750521 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af986d4-5f4a-407a-b9c8-815250d3a145-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6af986d4-5f4a-407a-b9c8-815250d3a145" (UID: "6af986d4-5f4a-407a-b9c8-815250d3a145"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847362 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847525 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847542 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6af986d4-5f4a-407a-b9c8-815250d3a145-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847556 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bdpb\" (UniqueName: \"kubernetes.io/projected/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-kube-api-access-2bdpb\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847570 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8095b960-a00c-4939-aa15-5bd9e6e9e6e3-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847586 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847598 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847610 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af986d4-5f4a-407a-b9c8-815250d3a145-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.847621 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l64c6\" (UniqueName: \"kubernetes.io/projected/6af986d4-5f4a-407a-b9c8-815250d3a145-kube-api-access-l64c6\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.895718 4957 generic.go:334] "Generic (PLEG): container finished" podID="6af986d4-5f4a-407a-b9c8-815250d3a145" containerID="91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f" exitCode=0 Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.895827 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" event={"ID":"6af986d4-5f4a-407a-b9c8-815250d3a145","Type":"ContainerDied","Data":"91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f"} Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.895903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" event={"ID":"6af986d4-5f4a-407a-b9c8-815250d3a145","Type":"ContainerDied","Data":"7a86329d350e65950e28edb3a822c99d7516681739433509d360914b3cc67dc3"} Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.895850 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-768657449-zd8xv" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.895981 4957 scope.go:117] "RemoveContainer" containerID="91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.898385 4957 generic.go:334] "Generic (PLEG): container finished" podID="8095b960-a00c-4939-aa15-5bd9e6e9e6e3" containerID="6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e" exitCode=0 Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.898447 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" event={"ID":"8095b960-a00c-4939-aa15-5bd9e6e9e6e3","Type":"ContainerDied","Data":"6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e"} Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.898501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" event={"ID":"8095b960-a00c-4939-aa15-5bd9e6e9e6e3","Type":"ContainerDied","Data":"5c8544dd3bfa73a22430efcddfa34b954cdc4d4abdb7c174e3d9ecfd15bef8f4"} Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.898513 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.917527 4957 scope.go:117] "RemoveContainer" containerID="91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f" Feb 18 14:35:21 crc kubenswrapper[4957]: E0218 14:35:21.918126 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f\": container with ID starting with 91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f not found: ID does not exist" containerID="91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.918167 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f"} err="failed to get container status \"91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f\": rpc error: code = NotFound desc = could not find container \"91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f\": container with ID starting with 91d0110582942c4740b10d02f8ecda582646ac6a029d0cdb32be92ffa4396a2f not found: ID does not exist" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.918189 4957 scope.go:117] "RemoveContainer" containerID="6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.925941 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-768657449-zd8xv"] Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.928514 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-768657449-zd8xv"] Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.939254 4957 scope.go:117] "RemoveContainer" containerID="6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e" Feb 18 14:35:21 crc kubenswrapper[4957]: E0218 14:35:21.940342 4957 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e\": container with ID starting with 6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e not found: ID does not exist" containerID="6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.940370 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e"} err="failed to get container status \"6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e\": rpc error: code = NotFound desc = could not find container \"6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e\": container with ID starting with 6e345c590fa2c894eb640b3edf3c0e1beffd26ac5c9e0e756962022d28bcef6e not found: ID does not exist" Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.941275 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz"] Feb 18 14:35:21 crc kubenswrapper[4957]: I0218 14:35:21.943610 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fb88d4f5-hmhbz"] Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.221398 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597544d6-7743-45b7-91d3-54e797b3e342" path="/var/lib/kubelet/pods/597544d6-7743-45b7-91d3-54e797b3e342/volumes" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.223145 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af986d4-5f4a-407a-b9c8-815250d3a145" path="/var/lib/kubelet/pods/6af986d4-5f4a-407a-b9c8-815250d3a145/volumes" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.224234 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8095b960-a00c-4939-aa15-5bd9e6e9e6e3" path="/var/lib/kubelet/pods/8095b960-a00c-4939-aa15-5bd9e6e9e6e3/volumes" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728482 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk"] Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728726 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728742 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728754 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728762 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728773 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728780 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728789 4957 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728796 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728804 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728811 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728819 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728825 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728832 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d0adaf-a3c6-4121-970c-1f6205db177e" containerName="oauth-openshift" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728839 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d0adaf-a3c6-4121-970c-1f6205db177e" containerName="oauth-openshift" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728848 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728856 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728867 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8095b960-a00c-4939-aa15-5bd9e6e9e6e3" containerName="route-controller-manager" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728874 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8095b960-a00c-4939-aa15-5bd9e6e9e6e3" containerName="route-controller-manager" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728883 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728890 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728901 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728908 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728925 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="extract-utilities" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728933 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="extract-utilities" Feb 18 14:35:22 crc 
kubenswrapper[4957]: E0218 14:35:22.728944 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728951 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728961 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728967 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="extract-content" Feb 18 14:35:22 crc kubenswrapper[4957]: E0218 14:35:22.728974 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af986d4-5f4a-407a-b9c8-815250d3a145" containerName="controller-manager" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.728980 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af986d4-5f4a-407a-b9c8-815250d3a145" containerName="controller-manager" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729107 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af986d4-5f4a-407a-b9c8-815250d3a145" containerName="controller-manager" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729121 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="597544d6-7743-45b7-91d3-54e797b3e342" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729131 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f83e4add-33e3-4e43-af47-f9980471df63" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729145 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d0adaf-a3c6-4121-970c-1f6205db177e" containerName="oauth-openshift" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729155 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0237301c-c7c4-490c-9be1-daaa5db30a10" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729166 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe80d3e-51d2-41fa-aa8b-5607e7e69745" containerName="registry-server" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729240 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8095b960-a00c-4939-aa15-5bd9e6e9e6e3" containerName="route-controller-manager" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.729978 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.732835 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.733291 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.734576 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.734638 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.737026 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7"] Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.738089 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.740869 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk"] Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.743443 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.743825 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.744045 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.744725 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.745224 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.745505 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.745705 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.745949 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.746111 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.747090 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7"] Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760009 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2fbd50ae-c490-4099-b01e-de491ad70559-serving-cert\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760065 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-proxy-ca-bundles\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760085 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-client-ca\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760103 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1523e723-7145-4c5e-8834-990b6298db41-serving-cert\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760124 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd89g\" (UniqueName: \"kubernetes.io/projected/2fbd50ae-c490-4099-b01e-de491ad70559-kube-api-access-jd89g\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnqp\" (UniqueName: \"kubernetes.io/projected/1523e723-7145-4c5e-8834-990b6298db41-kube-api-access-9pnqp\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760172 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fbd50ae-c490-4099-b01e-de491ad70559-client-ca\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760201 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-config\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.760468 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2fbd50ae-c490-4099-b01e-de491ad70559-config\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-client-ca\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862748 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1523e723-7145-4c5e-8834-990b6298db41-serving-cert\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862783 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd89g\" (UniqueName: \"kubernetes.io/projected/2fbd50ae-c490-4099-b01e-de491ad70559-kube-api-access-jd89g\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862818 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pnqp\" (UniqueName: \"kubernetes.io/projected/1523e723-7145-4c5e-8834-990b6298db41-kube-api-access-9pnqp\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862836 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fbd50ae-c490-4099-b01e-de491ad70559-client-ca\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862863 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-config\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862896 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd50ae-c490-4099-b01e-de491ad70559-config\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862913 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbd50ae-c490-4099-b01e-de491ad70559-serving-cert\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: 
\"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.862942 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-proxy-ca-bundles\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.864040 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-client-ca\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.864284 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fbd50ae-c490-4099-b01e-de491ad70559-client-ca\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.864412 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd50ae-c490-4099-b01e-de491ad70559-config\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.864708 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-config\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.865517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1523e723-7145-4c5e-8834-990b6298db41-proxy-ca-bundles\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.867608 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbd50ae-c490-4099-b01e-de491ad70559-serving-cert\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.867645 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1523e723-7145-4c5e-8834-990b6298db41-serving-cert\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.881381 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jd89g\" (UniqueName: \"kubernetes.io/projected/2fbd50ae-c490-4099-b01e-de491ad70559-kube-api-access-jd89g\") pod \"route-controller-manager-6f58845d78-bfqn7\" (UID: \"2fbd50ae-c490-4099-b01e-de491ad70559\") " pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:22 crc kubenswrapper[4957]: I0218 14:35:22.883500 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pnqp\" (UniqueName: \"kubernetes.io/projected/1523e723-7145-4c5e-8834-990b6298db41-kube-api-access-9pnqp\") pod \"controller-manager-65c9ff5d9d-fz5wk\" (UID: \"1523e723-7145-4c5e-8834-990b6298db41\") " pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:23 crc kubenswrapper[4957]: I0218 14:35:23.058000 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:23 crc kubenswrapper[4957]: I0218 14:35:23.064441 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:23 crc kubenswrapper[4957]: I0218 14:35:23.317792 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7"] Feb 18 14:35:23 crc kubenswrapper[4957]: W0218 14:35:23.324879 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fbd50ae_c490_4099_b01e_de491ad70559.slice/crio-43ca15edc8d695dabb9d60c92c93485fa849a8d92615b6d043f748ce5cbc37d4 WatchSource:0}: Error finding container 43ca15edc8d695dabb9d60c92c93485fa849a8d92615b6d043f748ce5cbc37d4: Status 404 returned error can't find the container with id 43ca15edc8d695dabb9d60c92c93485fa849a8d92615b6d043f748ce5cbc37d4 Feb 18 14:35:23 crc kubenswrapper[4957]: I0218 14:35:23.461281 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk"] Feb 18 14:35:23 crc kubenswrapper[4957]: I0218 14:35:23.916522 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" event={"ID":"2fbd50ae-c490-4099-b01e-de491ad70559","Type":"ContainerStarted","Data":"43ca15edc8d695dabb9d60c92c93485fa849a8d92615b6d043f748ce5cbc37d4"} Feb 18 14:35:23 crc kubenswrapper[4957]: I0218 14:35:23.917779 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" event={"ID":"1523e723-7145-4c5e-8834-990b6298db41","Type":"ContainerStarted","Data":"a7484f301fb4294d6da9d1396ae6959922cdf71310dd6cde5735f45e7bd86672"} Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.729736 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76b96558df-8qxws"] Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.731142 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.734374 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.734719 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.734847 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.735300 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.735499 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.736274 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.736385 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.736412 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.736358 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.736807 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.737580 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.738005 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.756626 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.826728 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.833842 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76b96558df-8qxws"] Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.835972 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924013 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-login\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " 
pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c63d324-6b52-4815-922d-3dc270315126-audit-dir\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924110 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924132 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924153 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924204 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924225 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-audit-policies\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924279 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-error\") pod 
\"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924300 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-session\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924344 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924361 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7l5\" (UniqueName: \"kubernetes.io/projected/3c63d324-6b52-4815-922d-3dc270315126-kube-api-access-5q7l5\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924383 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.924865 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" event={"ID":"1523e723-7145-4c5e-8834-990b6298db41","Type":"ContainerStarted","Data":"fc933db99029da447db2b03fe12f3fa610f419a635cff15cf52b13279a672238"} Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.926530 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.933343 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" event={"ID":"2fbd50ae-c490-4099-b01e-de491ad70559","Type":"ContainerStarted","Data":"3ebc227181edc68e492ff82667e56ce703d103b6b3b7df6b20a3689f34338478"} Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.933918 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.941083 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.942331 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.954860 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podStartSLOduration=3.954836143 podStartE2EDuration="3.954836143s" podCreationTimestamp="2026-02-18 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:35:24.951992641 +0000 UTC m=+231.472857385" watchObservedRunningTime="2026-02-18 14:35:24.954836143 +0000 UTC m=+231.475700887" Feb 18 14:35:24 crc kubenswrapper[4957]: I0218 14:35:24.981841 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podStartSLOduration=3.9818152700000002 podStartE2EDuration="3.98181527s" podCreationTimestamp="2026-02-18 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:35:24.976206508 +0000 UTC m=+231.497071272" watchObservedRunningTime="2026-02-18 14:35:24.98181527 +0000 UTC m=+231.502680014" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025503 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025575 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-error\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025606 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-session\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025632 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025660 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025690 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q7l5\" (UniqueName: \"kubernetes.io/projected/3c63d324-6b52-4815-922d-3dc270315126-kube-api-access-5q7l5\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025717 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-login\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025893 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c63d324-6b52-4815-922d-3dc270315126-audit-dir\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.025985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.026014 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.026039 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.026085 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.026114 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-audit-policies\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.028128 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c63d324-6b52-4815-922d-3dc270315126-audit-dir\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.030364 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.033616 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.034446 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-audit-policies\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.036643 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.036895 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.038811 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-login\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.039120 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-session\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.039383 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.039890 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.048437 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.051383 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q7l5\" (UniqueName: \"kubernetes.io/projected/3c63d324-6b52-4815-922d-3dc270315126-kube-api-access-5q7l5\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.051494 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-template-error\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.059201 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c63d324-6b52-4815-922d-3dc270315126-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b96558df-8qxws\" (UID: \"3c63d324-6b52-4815-922d-3dc270315126\") " pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.128415 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.550236 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76b96558df-8qxws"] Feb 18 14:35:25 crc kubenswrapper[4957]: W0218 14:35:25.556713 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c63d324_6b52_4815_922d_3dc270315126.slice/crio-046fc8b45d243a3be6c42d3d225ef9a8d1f582d424408fba0bc3e6c7ea2c3a32 WatchSource:0}: Error finding container 046fc8b45d243a3be6c42d3d225ef9a8d1f582d424408fba0bc3e6c7ea2c3a32: Status 404 returned error can't find the container with id 046fc8b45d243a3be6c42d3d225ef9a8d1f582d424408fba0bc3e6c7ea2c3a32 Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.941846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" event={"ID":"3c63d324-6b52-4815-922d-3dc270315126","Type":"ContainerStarted","Data":"64b9b8b9859838921b9eec373f19ff6a20f860a2f381d6a464a66b0e6a1348b1"} Feb 18 14:35:25 crc kubenswrapper[4957]: I0218 14:35:25.942288 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" event={"ID":"3c63d324-6b52-4815-922d-3dc270315126","Type":"ContainerStarted","Data":"046fc8b45d243a3be6c42d3d225ef9a8d1f582d424408fba0bc3e6c7ea2c3a32"} Feb 18 14:35:26 crc kubenswrapper[4957]: I0218 14:35:26.962686 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:26 crc kubenswrapper[4957]: I0218 14:35:26.969148 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" Feb 18 14:35:26 crc kubenswrapper[4957]: I0218 14:35:26.990918 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podStartSLOduration=32.990892416 podStartE2EDuration="32.990892416s" podCreationTimestamp="2026-02-18 14:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:35:26.982948297 +0000 UTC m=+233.503813051" watchObservedRunningTime="2026-02-18 14:35:26.990892416 +0000 UTC m=+233.511757160" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.808640 4957 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.811339 4957 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.812477 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.852051 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.893170 4957 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.893470 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa" gracePeriod=15 Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.893536 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f" gracePeriod=15 Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.893616 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c" gracePeriod=15 Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.893556 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77" gracePeriod=15 Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.893606 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416" gracePeriod=15 Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.895515 4957 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.895923 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.895940 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.895948 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.895954 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.895963 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.895969 4957 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.895978 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.895983 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.895996 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896002 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.896010 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896016 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:35:32 crc kubenswrapper[4957]: E0218 14:35:32.896023 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896029 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896116 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896126 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896132 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896144 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896152 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.896354 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.939046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 
14:35:32.939122 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.939149 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.939363 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:32 crc kubenswrapper[4957]: I0218 14:35:32.939447 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041110 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041163 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041181 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041201 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041220 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041237 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041258 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041279 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041273 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041406 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041446 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.041460 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.108879 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: 
connect: connection refused" start-of-body= Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.108955 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.142862 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.142921 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.142955 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.142989 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.143080 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.143121 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.150678 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:35:33 crc kubenswrapper[4957]: W0218 14:35:33.171921 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bc7ceeda4e09e968426d9c8404fcacbd5c1c1e4d116a8e6a853bfc4c0145a8d5 WatchSource:0}: Error finding container bc7ceeda4e09e968426d9c8404fcacbd5c1c1e4d116a8e6a853bfc4c0145a8d5: Status 404 returned error can't find the container with id bc7ceeda4e09e968426d9c8404fcacbd5c1c1e4d116a8e6a853bfc4c0145a8d5 Feb 18 14:35:33 crc kubenswrapper[4957]: E0218 14:35:33.175263 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18955df4820002db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 14:35:33.174600411 +0000 UTC m=+239.695465165,LastTimestamp:2026-02-18 14:35:33.174600411 +0000 UTC m=+239.695465165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.910089 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 18 14:35:33 crc kubenswrapper[4957]: I0218 14:35:33.910501 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.000246 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.000445 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.000714 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.000909 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.001217 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.001473 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.001502 4957 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.001735 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="200ms" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.002345 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.003119 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f" exitCode=0 Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.003151 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c" exitCode=0 Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.003165 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77" exitCode=0 Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.003174 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416" exitCode=2 Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.003255 4957 scope.go:117] "RemoveContainer" containerID="3ff013b6ed4aa92a4bda93a646e7ffd4a158daab436c25025c7fbee49716bcfd" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.005578 4957 generic.go:334] "Generic (PLEG): container finished" podID="2ea270e1-6732-4aed-8052-bc3a03f88791" containerID="fda91dd5fe6369a9cc479dc125beef4b5aeda1e8faf7de11ec41220bff3c0efb" exitCode=0 Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.005659 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ea270e1-6732-4aed-8052-bc3a03f88791","Type":"ContainerDied","Data":"fda91dd5fe6369a9cc479dc125beef4b5aeda1e8faf7de11ec41220bff3c0efb"} Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.006464 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.006668 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.006968 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.007973 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4924588e3eeea5e6366f50a717d811f1035ee266844faa77fa935150d150f8f3"} Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.008007 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bc7ceeda4e09e968426d9c8404fcacbd5c1c1e4d116a8e6a853bfc4c0145a8d5"} Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.008899 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.009078 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.009312 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.202623 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="400ms" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.214883 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc 
kubenswrapper[4957]: I0218 14:35:34.215447 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: I0218 14:35:34.215832 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:34 crc kubenswrapper[4957]: E0218 14:35:34.604099 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="800ms" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.026854 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.288962 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.290188 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.291025 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.291612 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.291937 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384439 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384542 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") 
pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384570 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384589 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384646 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384792 4957 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384805 4957 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.384786 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:35:35 crc kubenswrapper[4957]: E0218 14:35:35.404845 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="1.6s" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.407588 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.407991 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.408157 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.408326 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.485783 4957 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.586920 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-kubelet-dir\") pod \"2ea270e1-6732-4aed-8052-bc3a03f88791\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.587050 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-var-lock\") pod \"2ea270e1-6732-4aed-8052-bc3a03f88791\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.587075 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ea270e1-6732-4aed-8052-bc3a03f88791" (UID: "2ea270e1-6732-4aed-8052-bc3a03f88791"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.587113 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ea270e1-6732-4aed-8052-bc3a03f88791-kube-api-access\") pod \"2ea270e1-6732-4aed-8052-bc3a03f88791\" (UID: \"2ea270e1-6732-4aed-8052-bc3a03f88791\") " Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.587159 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ea270e1-6732-4aed-8052-bc3a03f88791" (UID: "2ea270e1-6732-4aed-8052-bc3a03f88791"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.587457 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.587476 4957 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ea270e1-6732-4aed-8052-bc3a03f88791-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.592693 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea270e1-6732-4aed-8052-bc3a03f88791-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ea270e1-6732-4aed-8052-bc3a03f88791" (UID: "2ea270e1-6732-4aed-8052-bc3a03f88791"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:35 crc kubenswrapper[4957]: I0218 14:35:35.688946 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ea270e1-6732-4aed-8052-bc3a03f88791-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.038785 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.039995 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa" exitCode=0 Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.040083 4957 scope.go:117] "RemoveContainer" containerID="a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.040098 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.043011 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ea270e1-6732-4aed-8052-bc3a03f88791","Type":"ContainerDied","Data":"44add0e762f098374f054996ab4cad264e290e0bfaa54acf37440e24ac49436a"} Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.043064 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44add0e762f098374f054996ab4cad264e290e0bfaa54acf37440e24ac49436a" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.043040 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.057641 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.058569 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.059039 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.062815 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.063090 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.063371 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.068845 4957 scope.go:117] "RemoveContainer" containerID="d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.083871 4957 scope.go:117] "RemoveContainer" containerID="6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.098729 4957 scope.go:117] "RemoveContainer" containerID="6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.114743 4957 scope.go:117] "RemoveContainer" containerID="51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.129774 4957 scope.go:117] "RemoveContainer" containerID="09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.146783 4957 scope.go:117] "RemoveContainer" containerID="a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f" Feb 18 14:35:36 crc 
kubenswrapper[4957]: E0218 14:35:36.147426 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\": container with ID starting with a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f not found: ID does not exist" containerID="a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.147468 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f"} err="failed to get container status \"a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\": rpc error: code = NotFound desc = could not find container \"a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f\": container with ID starting with a8a11b36734e57eec3aba0b6d96b4102850e46b96eb16d99cfcacf273f1ce69f not found: ID does not exist" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.147500 4957 scope.go:117] "RemoveContainer" containerID="d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c" Feb 18 14:35:36 crc kubenswrapper[4957]: E0218 14:35:36.147912 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\": container with ID starting with d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c not found: ID does not exist" containerID="d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.147937 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c"} err="failed to get container status \"d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\": rpc error: code = NotFound desc = could not find container \"d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c\": container with ID starting with d1c0f986e657c20a059efccb494462926f4fa62d30b012e6ce2b530d679d9d5c not found: ID does not exist" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.147952 4957 scope.go:117] "RemoveContainer" containerID="6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77" Feb 18 14:35:36 crc kubenswrapper[4957]: E0218 14:35:36.148472 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\": container with ID starting with 6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77 not found: ID does not exist" containerID="6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.148492 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77"} err="failed to get container status \"6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\": rpc error: code = NotFound desc = could not find container \"6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77\": container with ID starting with 6a2ec9911be4f4d8715bb722cdf72cdbf478111030a7311b93eb67f55ddbce77 not found: ID does not exist" Feb 18 14:35:36 crc kubenswrapper[4957]: 
I0218 14:35:36.148505 4957 scope.go:117] "RemoveContainer" containerID="6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416" Feb 18 14:35:36 crc kubenswrapper[4957]: E0218 14:35:36.148779 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\": container with ID starting with 6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416 not found: ID does not exist" containerID="6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.148797 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416"} err="failed to get container status \"6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\": rpc error: code = NotFound desc = could not find container \"6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416\": container with ID starting with 6d46f625f180ab9afde6a7cb5d578b88012bf5b297e691a35f1fe1af000f1416 not found: ID does not exist" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.148808 4957 scope.go:117] "RemoveContainer" containerID="51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa" Feb 18 14:35:36 crc kubenswrapper[4957]: E0218 14:35:36.149254 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\": container with ID starting with 51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa not found: ID does not exist" containerID="51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.149272 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa"} err="failed to get container status \"51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\": rpc error: code = NotFound desc = could not find container \"51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa\": container with ID starting with 51eded762a62fcb704d09a9c4336a0e308e44c662b636772ea79f754c09dcaaa not found: ID does not exist" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.149284 4957 scope.go:117] "RemoveContainer" containerID="09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711" Feb 18 14:35:36 crc kubenswrapper[4957]: E0218 14:35:36.149641 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\": container with ID starting with 09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711 not found: ID does not exist" containerID="09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.149662 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711"} err="failed to get container status \"09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\": rpc error: code = NotFound desc = could not find container \"09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711\": container 
with ID starting with 09ddd547ad67f12c0d4f3b0505206fc69b4bcfcfa966c6f98da8ef4f4e979711 not found: ID does not exist" Feb 18 14:35:36 crc kubenswrapper[4957]: I0218 14:35:36.220387 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 14:35:37 crc kubenswrapper[4957]: E0218 14:35:37.006850 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="3.2s" Feb 18 14:35:38 crc kubenswrapper[4957]: E0218 14:35:38.226360 4957 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" volumeName="registry-storage" Feb 18 14:35:39 crc kubenswrapper[4957]: E0218 14:35:39.953070 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18955df4820002db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 14:35:33.174600411 +0000 UTC m=+239.695465165,LastTimestamp:2026-02-18 14:35:33.174600411 +0000 UTC m=+239.695465165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 14:35:40 crc kubenswrapper[4957]: E0218 14:35:40.208450 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="6.4s" Feb 18 14:35:44 crc kubenswrapper[4957]: I0218 14:35:44.215159 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:44 crc kubenswrapper[4957]: I0218 14:35:44.216003 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: 
connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.097023 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.098028 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.109274 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.109357 4957 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c" exitCode=1 Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.109406 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c"} Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.110127 4957 scope.go:117] "RemoveContainer" containerID="bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.110266 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.110872 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.111339 4957 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.212817 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.214033 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.214694 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.215398 4957 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.226869 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.226902 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:46 crc kubenswrapper[4957]: E0218 14:35:46.228010 4957 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:46 crc kubenswrapper[4957]: I0218 14:35:46.228715 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:46 crc kubenswrapper[4957]: E0218 14:35:46.609675 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="7s" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.120838 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.120975 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"72629481855f492daa6c750fb938bc0e4083bafacfb2a9c5025a2eb58e3dd607"} Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.122265 4957 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.122958 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.123524 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.124357 4957 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8d8f5a0910e678d96d7fd80f767e5c725477c1111a3097eb97f9929b209642a9" exitCode=0 Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.124406 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8d8f5a0910e678d96d7fd80f767e5c725477c1111a3097eb97f9929b209642a9"} Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.124471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f8eeac6719c9974661710fd50b3db71b39b3af12d2ac05aaafb639192b58af4"} Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.124774 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.124789 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:47 crc kubenswrapper[4957]: 
I0218 14:35:47.125072 4957 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:47 crc kubenswrapper[4957]: E0218 14:35:47.125257 4957 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.125483 4957 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:47 crc kubenswrapper[4957]: I0218 14:35:47.125799 4957 status_manager.go:851] "Failed to get status for pod" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 18 14:35:48 crc kubenswrapper[4957]: I0218 14:35:48.137347 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c99f1c5ce344328638ec283b1e7799eb80096af70f4c207ecb1c80796e360270"} Feb 18 14:35:48 crc kubenswrapper[4957]: I0218 14:35:48.137768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8cab88d601001b3a6cdf8e5779b39453e08e8483d9f60b095837393b83817a4c"} Feb 18 14:35:48 crc kubenswrapper[4957]: I0218 14:35:48.137782 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34fcc818b90e0c40dbb694983d43d3298275cc94b8370e79be3f24885c14b2ed"} Feb 18 14:35:49 crc kubenswrapper[4957]: I0218 14:35:49.146291 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f454cee039eea3abb050cbe0abf9aee0dfad810ec1886a970e2df82004c146f"} Feb 18 14:35:49 crc kubenswrapper[4957]: I0218 14:35:49.146780 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:49 crc kubenswrapper[4957]: I0218 14:35:49.146797 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"74e96c0c2c8b091f3b79ce40857171ba63325ea44c6a079e6bbbca25a0f63957"} Feb 18 14:35:49 crc kubenswrapper[4957]: I0218 14:35:49.146644 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:49 crc kubenswrapper[4957]: I0218 
14:35:49.146818 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:51 crc kubenswrapper[4957]: I0218 14:35:51.229362 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:51 crc kubenswrapper[4957]: I0218 14:35:51.229784 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:51 crc kubenswrapper[4957]: I0218 14:35:51.234792 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:54 crc kubenswrapper[4957]: I0218 14:35:54.168990 4957 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:35:54 crc kubenswrapper[4957]: I0218 14:35:54.208008 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:35:54 crc kubenswrapper[4957]: I0218 14:35:54.220509 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:35:54 crc kubenswrapper[4957]: I0218 14:35:54.264939 4957 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="cc302efd-94df-4de8-9225-0c629c2222c8" Feb 18 14:35:55 crc kubenswrapper[4957]: I0218 14:35:55.185492 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:55 crc kubenswrapper[4957]: I0218 14:35:55.185529 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6b3bef1f-7cee-4035-bc8e-195fadcf2d19" Feb 18 14:35:55 crc kubenswrapper[4957]: I0218 14:35:55.185568 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:35:55 crc kubenswrapper[4957]: I0218 14:35:55.188628 4957 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="cc302efd-94df-4de8-9225-0c629c2222c8" Feb 18 14:35:56 crc kubenswrapper[4957]: I0218 14:35:56.102719 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:36:04 crc kubenswrapper[4957]: I0218 14:36:04.520457 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 14:36:04 crc kubenswrapper[4957]: I0218 14:36:04.696408 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:36:04 crc kubenswrapper[4957]: I0218 14:36:04.728341 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 14:36:04 crc kubenswrapper[4957]: I0218 14:36:04.924908 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.033500 4957 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.311121 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.320572 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.499612 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.570470 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.803939 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 14:36:05 crc kubenswrapper[4957]: I0218 14:36:05.967988 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.038818 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.066371 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.348409 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.452995 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.600374 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.609533 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.685279 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.709883 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.767705 4957 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.819819 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 14:36:06 crc kubenswrapper[4957]: I0218 14:36:06.940440 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.020625 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.022513 4957 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.038327 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.209389 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.360393 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.369594 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.612254 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.744374 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.843351 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.854908 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 14:36:07 crc kubenswrapper[4957]: I0218 14:36:07.948814 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.030015 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.040230 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.056728 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.100505 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.107481 4957 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.206437 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.213249 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.241762 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.335745 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.478615 4957 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.478865 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.500093 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.505470 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.524492 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.609338 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.642062 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.703727 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.716463 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.745488 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.807713 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.820038 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.826122 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.905379 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 14:36:08 crc kubenswrapper[4957]: I0218 14:36:08.977654 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.169830 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.176249 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.277880 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.293476 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 
14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.471684 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.552184 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.575372 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.576357 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.624319 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.674592 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.756033 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.825080 4957 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.825837 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.825816798 podStartE2EDuration="37.825816798s" podCreationTimestamp="2026-02-18 14:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:35:54.20820115 +0000 UTC m=+260.729065894" watchObservedRunningTime="2026-02-18 14:36:09.825816798 +0000 UTC m=+276.346681542" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.829330 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.829379 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.834332 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.835773 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.846881 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.846856858 podStartE2EDuration="15.846856858s" podCreationTimestamp="2026-02-18 14:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:36:09.84484723 +0000 UTC m=+276.365711984" watchObservedRunningTime="2026-02-18 14:36:09.846856858 +0000 UTC m=+276.367721612" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.861259 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 
14:36:09.927187 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 14:36:09 crc kubenswrapper[4957]: I0218 14:36:09.999076 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.029217 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.069777 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.101915 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.144457 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.162097 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.203358 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.212594 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.252247 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.298018 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.326945 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.368723 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.380403 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.417363 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.454668 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.576248 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.701130 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.737706 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 14:36:10 crc kubenswrapper[4957]: 
I0218 14:36:10.748617 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.753557 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.754293 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:36:10 crc kubenswrapper[4957]: I0218 14:36:10.774201 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.019043 4957 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.030229 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.062992 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.089669 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.275401 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.338533 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.342336 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.362059 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.400606 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.442919 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.549987 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.585652 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.619296 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.634075 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.720902 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 
Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.753118 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.778521 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.796326 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.842137 4957 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.938812 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 14:36:11 crc kubenswrapper[4957]: I0218 14:36:11.958482 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.104578 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.181109 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.416001 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.429297 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.445733 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.454770 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.540854 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.542070 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.633260 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.707445 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.819142 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.961712 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.976593 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 18 14:36:12 crc kubenswrapper[4957]: I0218 14:36:12.997219 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.006697 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.032263 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.118032 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.185149 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.185186 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.199873 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.224812 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.255708 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.264098 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.288854 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.299198 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.306645 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.348381 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.361907 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.388311 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.520463 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.544985 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.695347 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.721079 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.778877 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 18 14:36:13 crc kubenswrapper[4957]: I0218 14:36:13.812887 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.052857 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.053155 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.058526 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.172494 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.192609 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.209583 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.478686 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.484903 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.504937 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.568107 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.569347 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.667604 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.809847 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.883639 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.907817 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 18 14:36:14 crc kubenswrapper[4957]: I0218 14:36:14.960212 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.024647 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.066147 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.103948 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.144671 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.170866 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.199567 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.206238 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.208385 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.289567 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.400270 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.519635 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.622584 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.760257 4957 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.795268 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.809148 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.846325 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 18 14:36:15 crc kubenswrapper[4957]: I0218 14:36:15.988842 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.020341 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.113070 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.150264 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.213562 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.235040 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.253117 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.316300 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.343136 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.372842 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.392303 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.532234 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.566520 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.598842 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.600337 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.609047 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.769620 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.774914 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.827705 4957 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.827997 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4924588e3eeea5e6366f50a717d811f1035ee266844faa77fa935150d150f8f3" gracePeriod=5
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.850378 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
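The "SyncLoop REMOVE" source="file" entry followed by "Killing container with a grace period ... gracePeriod=5" above is the kubelet stopping the startup-monitor static pod because its manifest disappeared from the manifest directory. For API-managed pods the same grace-period plumbing is driven by the delete request; a minimal client-go sketch, assuming in-cluster credentials (static pods like this one are removed by deleting the manifest file, not via this API):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // An explicit grace period: the runtime sends SIGTERM, waits this many
        // seconds, then sends SIGKILL — the sequence kubenswrapper logs as
        // "Killing container with a grace period".
        grace := int64(5)
        err = client.CoreV1().Pods("openshift-kube-apiserver").Delete(
            context.TODO(),
            "kube-apiserver-startup-monitor-crc", // pod name taken from the log above
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        if err != nil {
            panic(err)
        }
    }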
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.935057 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.944375 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 18 14:36:16 crc kubenswrapper[4957]: I0218 14:36:16.958327 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.042384 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.105559 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.184068 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.201103 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.288138 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.293726 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.358268 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.396025 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.466828 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.538253 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.549143 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.674939 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.916276 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 18 14:36:17 crc kubenswrapper[4957]: I0218 14:36:17.916453 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.092372 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.118974 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.158996 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.161918 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.241328 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.315079 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.323917 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.450601 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.573955 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.581178 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.598952 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.684824 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.779462 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.824633 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 18 14:36:18 crc kubenswrapper[4957]: I0218 14:36:18.918326 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.050177 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.058897 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.128100 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.357515 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.361680 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.398015 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.478955 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.489221 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.634059 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.653277 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.684735 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 18 14:36:19 crc kubenswrapper[4957]: I0218 14:36:19.845556 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 18 14:36:20 crc kubenswrapper[4957]: I0218 14:36:20.113206 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 14:36:20 crc kubenswrapper[4957]: I0218 14:36:20.246783 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 18 14:36:20 crc kubenswrapper[4957]: I0218 14:36:20.273788 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 18 14:36:20 crc kubenswrapper[4957]: I0218 14:36:20.505987 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 18 14:36:20 crc kubenswrapper[4957]: I0218 14:36:20.623394 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 18 14:36:20 crc kubenswrapper[4957]: I0218 14:36:20.833742 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 18 14:36:21 crc kubenswrapper[4957]: I0218 14:36:21.096695 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 18 14:36:21 crc kubenswrapper[4957]: I0218 14:36:21.653536 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.338179 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.338245 4957 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4924588e3eeea5e6366f50a717d811f1035ee266844faa77fa935150d150f8f3" exitCode=137
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.423035 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
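A note on the exitCode=137 above: 137 = 128 + 9, i.e. the container was killed by signal 9 (SIGKILL). The startup-monitor was told to stop at 14:36:16.827 with gracePeriod=5 and evidently did not exit within those 5 seconds, so the runtime force-killed it; the PLEG reports it finished at 14:36:22.338, roughly 5.5 s after the kill was issued, consistent with SIGTERM, a 5-second wait, then SIGKILL.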
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.423514 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580561 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580626 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580748 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580775 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580803 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580897 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.580994 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.581017 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.581032 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.581145 4957 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.581188 4957 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.581204 4957 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.581216 4957 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.588924 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:36:22 crc kubenswrapper[4957]: I0218 14:36:22.682341 4957 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:36:23 crc kubenswrapper[4957]: I0218 14:36:23.345585 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 18 14:36:23 crc kubenswrapper[4957]: I0218 14:36:23.345680 4957 scope.go:117] "RemoveContainer" containerID="4924588e3eeea5e6366f50a717d811f1035ee266844faa77fa935150d150f8f3"
Feb 18 14:36:23 crc kubenswrapper[4957]: I0218 14:36:23.345782 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 14:36:24 crc kubenswrapper[4957]: I0218 14:36:24.223711 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 18 14:36:24 crc kubenswrapper[4957]: I0218 14:36:24.223954 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 18 14:36:24 crc kubenswrapper[4957]: I0218 14:36:24.236319 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 14:36:24 crc kubenswrapper[4957]: I0218 14:36:24.236381 4957 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="95adb6d7-8ae2-4712-bdf6-50f01efde972"
Feb 18 14:36:24 crc kubenswrapper[4957]: I0218 14:36:24.239457 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 14:36:24 crc kubenswrapper[4957]: I0218 14:36:24.239510 4957 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="95adb6d7-8ae2-4712-bdf6-50f01efde972"
Feb 18 14:36:33 crc kubenswrapper[4957]: I0218 14:36:33.977172 4957 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 18 14:36:34 crc kubenswrapper[4957]: I0218 14:36:34.402942 4957 generic.go:334] "Generic (PLEG): container finished" podID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerID="77bcd1e092adb83f4c51065440a331bbd68ddf9dab496ab869c506288a8f5cce" exitCode=0
Feb 18 14:36:34 crc kubenswrapper[4957]: I0218 14:36:34.402993 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" event={"ID":"06d8fe1f-0a71-480f-899e-a070c091cb79","Type":"ContainerDied","Data":"77bcd1e092adb83f4c51065440a331bbd68ddf9dab496ab869c506288a8f5cce"}
Feb 18 14:36:34 crc kubenswrapper[4957]: I0218 14:36:34.403474 4957 scope.go:117] "RemoveContainer" containerID="77bcd1e092adb83f4c51065440a331bbd68ddf9dab496ab869c506288a8f5cce"
Feb 18 14:36:35 crc kubenswrapper[4957]: I0218 14:36:35.410552 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" event={"ID":"06d8fe1f-0a71-480f-899e-a070c091cb79","Type":"ContainerStarted","Data":"884b434ea1e0be5f31e7ec9afe8bfd96de52d171b7ec7263e689ef6d2e8c694f"}
Feb 18 14:36:35 crc kubenswrapper[4957]: I0218 14:36:35.411311 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k"
Feb 18 14:36:35 crc kubenswrapper[4957]: I0218 14:36:35.413460 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k"
Feb 18 14:37:07 crc kubenswrapper[4957]: I0218 14:37:07.279657 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:37:07 crc kubenswrapper[4957]: I0218 14:37:07.281322 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.078969 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-97lxp"]
Feb 18 14:37:32 crc kubenswrapper[4957]: E0218 14:37:32.079885 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.079900 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 14:37:32 crc kubenswrapper[4957]: E0218 14:37:32.079908 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" containerName="installer"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.079914 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" containerName="installer"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.079997 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.080015 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea270e1-6732-4aed-8052-bc3a03f88791" containerName="installer"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.080402 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.093410 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-97lxp"]
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.241893 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-trusted-ca\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.242197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp"
Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.242222 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp"
\"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-bound-sa-token\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.242279 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86znz\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-kube-api-access-86znz\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.242311 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.242400 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-registry-certificates\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.242451 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-registry-tls\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.283273 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344089 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344168 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-bound-sa-token\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344200 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86znz\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-kube-api-access-86znz\") pod 
\"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344236 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344264 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-registry-certificates\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344299 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-registry-tls\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.344353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-trusted-ca\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.346961 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.348462 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-registry-certificates\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.352357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-trusted-ca\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.359601 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.360109 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-registry-tls\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.380367 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-bound-sa-token\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.382493 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86znz\" (UniqueName: \"kubernetes.io/projected/ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6-kube-api-access-86znz\") pod \"image-registry-66df7c8f76-97lxp\" (UID: \"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6\") " pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.410766 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:32 crc kubenswrapper[4957]: I0218 14:37:32.824866 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-97lxp"] Feb 18 14:37:33 crc kubenswrapper[4957]: I0218 14:37:33.719063 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" event={"ID":"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6","Type":"ContainerStarted","Data":"d0b729c9bc30b17b7c3560fa5070304c309cfba08fd9f1aa934c5cd190e8ef1e"} Feb 18 14:37:33 crc kubenswrapper[4957]: I0218 14:37:33.719849 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:33 crc kubenswrapper[4957]: I0218 14:37:33.719866 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" event={"ID":"ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6","Type":"ContainerStarted","Data":"90567fdb3afc7340329ecfc6bfcbe7c7acfc5cb7b425c011df0ab379e2f15760"} Feb 18 14:37:33 crc kubenswrapper[4957]: I0218 14:37:33.749981 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" podStartSLOduration=1.749951633 podStartE2EDuration="1.749951633s" podCreationTimestamp="2026-02-18 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:37:33.74334868 +0000 UTC m=+360.264213434" watchObservedRunningTime="2026-02-18 14:37:33.749951633 +0000 UTC m=+360.270816377" Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.279520 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.280299 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" 
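A note on the "Observed pod startup duration" entry above: podStartE2EDuration=1.749951633s appears to be watchObservedRunningTime minus podCreationTimestamp, i.e. 14:37:33.749951633 − 14:37:32.000000000 = 1.749951633 s. firstStartedPulling/lastFinishedPulling are the zero time because no image pull was needed, which is why podStartSLOduration equals the end-to-end figure.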
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.516242 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p65zp"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.516610 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p65zp" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="registry-server" containerID="cri-o://d97aba5261cebb04f6d8a81508e35c5e7d36e8d6907b02a16c62bed1d52ab7aa" gracePeriod=30 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.533614 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wh6pb"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.543104 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfz7k"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.543487 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator" containerID="cri-o://884b434ea1e0be5f31e7ec9afe8bfd96de52d171b7ec7263e689ef6d2e8c694f" gracePeriod=30 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.554280 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wg8d9"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.554594 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wg8d9" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="registry-server" containerID="cri-o://2c34676225b454785f4d326bc3bf8f563a626f8d4f5706567491f71d3539744b" gracePeriod=30 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.566673 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpb4w"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.567030 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gpb4w" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="registry-server" containerID="cri-o://584b5b4a77173200d863df2124a9f387ddb936c8f02e90a33157512daac54a56" gracePeriod=30 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.571011 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvskg"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.573455 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.581189 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvskg"] Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.722247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf88d\" (UniqueName: \"kubernetes.io/projected/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-kube-api-access-zf88d\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.722679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.722757 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.761879 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerID="584b5b4a77173200d863df2124a9f387ddb936c8f02e90a33157512daac54a56" exitCode=0 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.761966 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpb4w" event={"ID":"f4d1018f-2e03-4372-98e6-cbba16adff43","Type":"ContainerDied","Data":"584b5b4a77173200d863df2124a9f387ddb936c8f02e90a33157512daac54a56"} Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.764617 4957 generic.go:334] "Generic (PLEG): container finished" podID="af490140-34e3-4689-b13a-112b97f5cd9e" containerID="d97aba5261cebb04f6d8a81508e35c5e7d36e8d6907b02a16c62bed1d52ab7aa" exitCode=0 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.764669 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p65zp" event={"ID":"af490140-34e3-4689-b13a-112b97f5cd9e","Type":"ContainerDied","Data":"d97aba5261cebb04f6d8a81508e35c5e7d36e8d6907b02a16c62bed1d52ab7aa"} Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.766353 4957 generic.go:334] "Generic (PLEG): container finished" podID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerID="884b434ea1e0be5f31e7ec9afe8bfd96de52d171b7ec7263e689ef6d2e8c694f" exitCode=0 Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.766448 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" event={"ID":"06d8fe1f-0a71-480f-899e-a070c091cb79","Type":"ContainerDied","Data":"884b434ea1e0be5f31e7ec9afe8bfd96de52d171b7ec7263e689ef6d2e8c694f"} Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.766513 4957 scope.go:117] "RemoveContainer" containerID="77bcd1e092adb83f4c51065440a331bbd68ddf9dab496ab869c506288a8f5cce" 
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.782645 4957 generic.go:334] "Generic (PLEG): container finished" podID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerID="2c34676225b454785f4d326bc3bf8f563a626f8d4f5706567491f71d3539744b" exitCode=0
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.783108 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wh6pb" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="registry-server" containerID="cri-o://dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5" gracePeriod=30
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.783688 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerDied","Data":"2c34676225b454785f4d326bc3bf8f563a626f8d4f5706567491f71d3539744b"}
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.823536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.823609 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.823680 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf88d\" (UniqueName: \"kubernetes.io/projected/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-kube-api-access-zf88d\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.826285 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.830687 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:37 crc kubenswrapper[4957]: E0218 14:37:37.875401 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06d8fe1f_0a71_480f_899e_a070c091cb79.slice/crio-conmon-884b434ea1e0be5f31e7ec9afe8bfd96de52d171b7ec7263e689ef6d2e8c694f.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.876075 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf88d\" (UniqueName: \"kubernetes.io/projected/2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd-kube-api-access-zf88d\") pod \"marketplace-operator-79b997595-cvskg\" (UID: \"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:37 crc kubenswrapper[4957]: I0218 14:37:37.904603 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.067906 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.069385 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.085996 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p65zp"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.102672 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228072 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c5w5\" (UniqueName: \"kubernetes.io/projected/ffc559a7-20d1-416b-ae18-bcbfc5193d32-kube-api-access-9c5w5\") pod \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228128 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dffxq\" (UniqueName: \"kubernetes.io/projected/af490140-34e3-4689-b13a-112b97f5cd9e-kube-api-access-dffxq\") pod \"af490140-34e3-4689-b13a-112b97f5cd9e\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228229 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-utilities\") pod \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228256 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-catalog-content\") pod \"f4d1018f-2e03-4372-98e6-cbba16adff43\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228284 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxcvg\" (UniqueName: \"kubernetes.io/projected/f4d1018f-2e03-4372-98e6-cbba16adff43-kube-api-access-zxcvg\") pod \"f4d1018f-2e03-4372-98e6-cbba16adff43\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228313 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-utilities\") pod \"f4d1018f-2e03-4372-98e6-cbba16adff43\" (UID: \"f4d1018f-2e03-4372-98e6-cbba16adff43\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228353 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-operator-metrics\") pod \"06d8fe1f-0a71-480f-899e-a070c091cb79\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228389 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdv4s\" (UniqueName: \"kubernetes.io/projected/06d8fe1f-0a71-480f-899e-a070c091cb79-kube-api-access-jdv4s\") pod \"06d8fe1f-0a71-480f-899e-a070c091cb79\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228445 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-trusted-ca\") pod \"06d8fe1f-0a71-480f-899e-a070c091cb79\" (UID: \"06d8fe1f-0a71-480f-899e-a070c091cb79\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228484 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-utilities\") pod \"af490140-34e3-4689-b13a-112b97f5cd9e\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228515 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-catalog-content\") pod \"af490140-34e3-4689-b13a-112b97f5cd9e\" (UID: \"af490140-34e3-4689-b13a-112b97f5cd9e\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.228544 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-catalog-content\") pod \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\" (UID: \"ffc559a7-20d1-416b-ae18-bcbfc5193d32\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.229215 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-utilities" (OuterVolumeSpecName: "utilities") pod "f4d1018f-2e03-4372-98e6-cbba16adff43" (UID: "f4d1018f-2e03-4372-98e6-cbba16adff43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.229257 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "06d8fe1f-0a71-480f-899e-a070c091cb79" (UID: "06d8fe1f-0a71-480f-899e-a070c091cb79"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.230437 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-utilities" (OuterVolumeSpecName: "utilities") pod "ffc559a7-20d1-416b-ae18-bcbfc5193d32" (UID: "ffc559a7-20d1-416b-ae18-bcbfc5193d32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.232605 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-utilities" (OuterVolumeSpecName: "utilities") pod "af490140-34e3-4689-b13a-112b97f5cd9e" (UID: "af490140-34e3-4689-b13a-112b97f5cd9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.232737 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d1018f-2e03-4372-98e6-cbba16adff43-kube-api-access-zxcvg" (OuterVolumeSpecName: "kube-api-access-zxcvg") pod "f4d1018f-2e03-4372-98e6-cbba16adff43" (UID: "f4d1018f-2e03-4372-98e6-cbba16adff43"). InnerVolumeSpecName "kube-api-access-zxcvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.233008 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc559a7-20d1-416b-ae18-bcbfc5193d32-kube-api-access-9c5w5" (OuterVolumeSpecName: "kube-api-access-9c5w5") pod "ffc559a7-20d1-416b-ae18-bcbfc5193d32" (UID: "ffc559a7-20d1-416b-ae18-bcbfc5193d32"). InnerVolumeSpecName "kube-api-access-9c5w5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.233086 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d8fe1f-0a71-480f-899e-a070c091cb79-kube-api-access-jdv4s" (OuterVolumeSpecName: "kube-api-access-jdv4s") pod "06d8fe1f-0a71-480f-899e-a070c091cb79" (UID: "06d8fe1f-0a71-480f-899e-a070c091cb79"). InnerVolumeSpecName "kube-api-access-jdv4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.233125 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "06d8fe1f-0a71-480f-899e-a070c091cb79" (UID: "06d8fe1f-0a71-480f-899e-a070c091cb79"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.233138 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af490140-34e3-4689-b13a-112b97f5cd9e-kube-api-access-dffxq" (OuterVolumeSpecName: "kube-api-access-dffxq") pod "af490140-34e3-4689-b13a-112b97f5cd9e" (UID: "af490140-34e3-4689-b13a-112b97f5cd9e"). InnerVolumeSpecName "kube-api-access-dffxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.261159 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffc559a7-20d1-416b-ae18-bcbfc5193d32" (UID: "ffc559a7-20d1-416b-ae18-bcbfc5193d32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.310938 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af490140-34e3-4689-b13a-112b97f5cd9e" (UID: "af490140-34e3-4689-b13a-112b97f5cd9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330658 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330724 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxcvg\" (UniqueName: \"kubernetes.io/projected/f4d1018f-2e03-4372-98e6-cbba16adff43-kube-api-access-zxcvg\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330752 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330770 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330790 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdv4s\" (UniqueName: \"kubernetes.io/projected/06d8fe1f-0a71-480f-899e-a070c091cb79-kube-api-access-jdv4s\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330809 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06d8fe1f-0a71-480f-899e-a070c091cb79-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330826 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330844 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af490140-34e3-4689-b13a-112b97f5cd9e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330862 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc559a7-20d1-416b-ae18-bcbfc5193d32-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330882 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c5w5\" (UniqueName: \"kubernetes.io/projected/ffc559a7-20d1-416b-ae18-bcbfc5193d32-kube-api-access-9c5w5\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.330901 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dffxq\" (UniqueName: \"kubernetes.io/projected/af490140-34e3-4689-b13a-112b97f5cd9e-kube-api-access-dffxq\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.382554 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4d1018f-2e03-4372-98e6-cbba16adff43" (UID: "f4d1018f-2e03-4372-98e6-cbba16adff43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.432708 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d1018f-2e03-4372-98e6-cbba16adff43-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.446937 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvskg"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.662959 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh6pb"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.791957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh6pb" event={"ID":"74312833-84d3-4221-a8f7-07c892db5165","Type":"ContainerDied","Data":"dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.791800 4957 generic.go:334] "Generic (PLEG): container finished" podID="74312833-84d3-4221-a8f7-07c892db5165" containerID="dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5" exitCode=0
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.792027 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh6pb" event={"ID":"74312833-84d3-4221-a8f7-07c892db5165","Type":"ContainerDied","Data":"0a11791d259e2a1bab4a1b5628d1eccd2b70c0695a378c0eeec65cf412db93dc"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.792048 4957 scope.go:117] "RemoveContainer" containerID="dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.791984 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh6pb"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.794750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpb4w" event={"ID":"f4d1018f-2e03-4372-98e6-cbba16adff43","Type":"ContainerDied","Data":"fbb1194a118bb54e6ce20d642367aa7092b892befb61fe853d343736f8361e12"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.794780 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpb4w"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.797203 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p65zp" event={"ID":"af490140-34e3-4689-b13a-112b97f5cd9e","Type":"ContainerDied","Data":"afbd2873a934416248bac03b934d4a40e637756f15f94211e3f998840349075a"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.797348 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p65zp"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.802822 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k" event={"ID":"06d8fe1f-0a71-480f-899e-a070c091cb79","Type":"ContainerDied","Data":"d557f13f07fc41eb40f307ff8dd39eadfa840dc297d9d8631a8936835b7c521f"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.802978 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfz7k"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.805612 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wg8d9"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.805625 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wg8d9" event={"ID":"ffc559a7-20d1-416b-ae18-bcbfc5193d32","Type":"ContainerDied","Data":"1e4679f64a5102c65cfc61d96a99f281f67b7b285f11e659a85efde3304a2b43"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.808788 4957 scope.go:117] "RemoveContainer" containerID="34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.810397 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" event={"ID":"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd","Type":"ContainerStarted","Data":"1830e452e88ead9167650804bef4fbf015b4ee32728ed0842a8afb13ee9ec0f5"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.810501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" event={"ID":"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd","Type":"ContainerStarted","Data":"428aa9b40be6e64b4eb722661c006ede7c1609e4641f8a39588e28084e9d8812"}
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.811452 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.813493 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.813563 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.838243 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67dpz\" (UniqueName: \"kubernetes.io/projected/74312833-84d3-4221-a8f7-07c892db5165-kube-api-access-67dpz\") pod \"74312833-84d3-4221-a8f7-07c892db5165\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.839797 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-utilities\") pod \"74312833-84d3-4221-a8f7-07c892db5165\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.839943 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-catalog-content\") pod \"74312833-84d3-4221-a8f7-07c892db5165\" (UID: \"74312833-84d3-4221-a8f7-07c892db5165\") "
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.842927 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-utilities" (OuterVolumeSpecName: "utilities") pod "74312833-84d3-4221-a8f7-07c892db5165" (UID: "74312833-84d3-4221-a8f7-07c892db5165"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.847059 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podStartSLOduration=1.846374414 podStartE2EDuration="1.846374414s" podCreationTimestamp="2026-02-18 14:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:37:38.837788312 +0000 UTC m=+365.358653056" watchObservedRunningTime="2026-02-18 14:37:38.846374414 +0000 UTC m=+365.367239198"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.855579 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74312833-84d3-4221-a8f7-07c892db5165-kube-api-access-67dpz" (OuterVolumeSpecName: "kube-api-access-67dpz") pod "74312833-84d3-4221-a8f7-07c892db5165" (UID: "74312833-84d3-4221-a8f7-07c892db5165"). InnerVolumeSpecName "kube-api-access-67dpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.855762 4957 scope.go:117] "RemoveContainer" containerID="b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.862388 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpb4w"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.867601 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gpb4w"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.880525 4957 scope.go:117] "RemoveContainer" containerID="dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5"
Feb 18 14:37:38 crc kubenswrapper[4957]: E0218 14:37:38.884741 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5\": container with ID starting with dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5 not found: ID does not exist" containerID="dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.884806 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5"} err="failed to get container status \"dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5\": rpc error: code = NotFound desc = could not find container \"dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5\": container with ID starting with dbaa3e944aa77d4c3e27954dd14e21624758f428d52bd4a63750dfe0985cebe5 not found: ID does not exist"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.884834 4957 scope.go:117] "RemoveContainer" containerID="34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1"
Feb 18 14:37:38 crc kubenswrapper[4957]: E0218 14:37:38.885481 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1\": container with ID starting with 34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1 not found: ID does not exist" containerID="34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.885561 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1"} err="failed to get container status \"34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1\": rpc error: code = NotFound desc = could not find container \"34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1\": container with ID starting with 34bcb97b082fca12d9fccbfd6e09bc34808de7e528457b959bee3a5bac3fa1d1 not found: ID does not exist"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.885612 4957 scope.go:117] "RemoveContainer" containerID="b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a"
Feb 18 14:37:38 crc kubenswrapper[4957]: E0218 14:37:38.886134 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a\": container with ID starting with b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a not found: ID does not exist" containerID="b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.886161 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a"} err="failed to get container status \"b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a\": rpc error: code = NotFound desc = could not find container \"b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a\": container with ID starting with b089905d3f5dd76c2ff0ffca9f1c761476dcb7730038874fdfd2e39c1f462d5a not found: ID does not exist"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.886178 4957 scope.go:117] "RemoveContainer" containerID="584b5b4a77173200d863df2124a9f387ddb936c8f02e90a33157512daac54a56"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.892372 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfz7k"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.900071 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfz7k"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.905110 4957 scope.go:117] "RemoveContainer" containerID="242405b454f851299caf8bc87a30864e03ccb12c9699ed4c01538ce0b122e006"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.916168 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wg8d9"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.927080 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74312833-84d3-4221-a8f7-07c892db5165" (UID: "74312833-84d3-4221-a8f7-07c892db5165"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.931576 4957 scope.go:117] "RemoveContainer" containerID="5fad4694dfc8b3de7e40efa9fadd2ec82a1ab862d10013beb1fed441838e534f"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.937536 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wg8d9"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.941603 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.941629 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67dpz\" (UniqueName: \"kubernetes.io/projected/74312833-84d3-4221-a8f7-07c892db5165-kube-api-access-67dpz\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.941640 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74312833-84d3-4221-a8f7-07c892db5165-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.944210 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p65zp"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.947677 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p65zp"]
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.949383 4957 scope.go:117] "RemoveContainer" containerID="d97aba5261cebb04f6d8a81508e35c5e7d36e8d6907b02a16c62bed1d52ab7aa"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.963899 4957 scope.go:117] "RemoveContainer" containerID="855e6077a7264473e15c1864077ce238d3f521ccfa71b8ceee6a5af2408ab52f"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.980283 4957 scope.go:117] "RemoveContainer" containerID="a64b56e72369eec2d9b55244bbd3d5b4f0d4669e627896dd0f530abd8bf14204"
Feb 18 14:37:38 crc kubenswrapper[4957]: I0218 14:37:38.995983 4957 scope.go:117] "RemoveContainer" containerID="884b434ea1e0be5f31e7ec9afe8bfd96de52d171b7ec7263e689ef6d2e8c694f"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.017387 4957 scope.go:117] "RemoveContainer" containerID="2c34676225b454785f4d326bc3bf8f563a626f8d4f5706567491f71d3539744b"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.030274 4957 scope.go:117] "RemoveContainer" containerID="ffb32e84e89226a933d6a9cc18617bb1459d3b66f1433f4fc7302aa52f5a599a"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.043886 4957 scope.go:117] "RemoveContainer" containerID="6d19ba2bd7747d06d8dc2f1e82b008add7b9e6a84c91ec6bbda37c727b74d811"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.146783 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wh6pb"]
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.151605 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wh6pb"]
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.391807 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gllqp"]
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.392001 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.392013 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.392101 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.392108 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.392119 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.392125 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.392132 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396349 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396376 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396384 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396395 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396402 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396411 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396432 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396438 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396444 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396459 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396481 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396490 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396496 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396504 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396511 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396522 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396529 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="extract-content"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396540 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396545 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="extract-utilities"
Feb 18 14:37:39 crc kubenswrapper[4957]: E0218 14:37:39.396552 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396558 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396696 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396717 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396725 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="74312833-84d3-4221-a8f7-07c892db5165" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396740 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396751 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" containerName="registry-server"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.396942 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" containerName="marketplace-operator"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.397486 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllqp"]
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.397575 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.399783 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.550734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95ede57-e275-4ba0-834d-43356f6b960b-catalog-content\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.550800 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95ede57-e275-4ba0-834d-43356f6b960b-utilities\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.550918 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4t8\" (UniqueName: \"kubernetes.io/projected/b95ede57-e275-4ba0-834d-43356f6b960b-kube-api-access-zh4t8\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.651959 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95ede57-e275-4ba0-834d-43356f6b960b-utilities\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.652093 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4t8\" (UniqueName: \"kubernetes.io/projected/b95ede57-e275-4ba0-834d-43356f6b960b-kube-api-access-zh4t8\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.652152 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95ede57-e275-4ba0-834d-43356f6b960b-catalog-content\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.652698 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95ede57-e275-4ba0-834d-43356f6b960b-catalog-content\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.652990 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95ede57-e275-4ba0-834d-43356f6b960b-utilities\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.677269 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4t8\" (UniqueName: \"kubernetes.io/projected/b95ede57-e275-4ba0-834d-43356f6b960b-kube-api-access-zh4t8\") pod \"redhat-marketplace-gllqp\" (UID: \"b95ede57-e275-4ba0-834d-43356f6b960b\") " pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.713482 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gllqp"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.824062 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg"
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.912842 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllqp"]
Feb 18 14:37:39 crc kubenswrapper[4957]: W0218 14:37:39.924932 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb95ede57_e275_4ba0_834d_43356f6b960b.slice/crio-c154b78095ab6a8b03c9e4ee4c5b514eeb1f1955452ebf46b290bfdef66afa33 WatchSource:0}: Error finding container c154b78095ab6a8b03c9e4ee4c5b514eeb1f1955452ebf46b290bfdef66afa33: Status 404 returned error can't find the container with id c154b78095ab6a8b03c9e4ee4c5b514eeb1f1955452ebf46b290bfdef66afa33
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.994588 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vrpnn"]
Feb 18 14:37:39 crc kubenswrapper[4957]: I0218 14:37:39.995690 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:39.999558 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.003672 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrpnn"]
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.160460 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-utilities\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.161216 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hds8\" (UniqueName: \"kubernetes.io/projected/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-kube-api-access-9hds8\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.161503 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-catalog-content\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.223399 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d8fe1f-0a71-480f-899e-a070c091cb79" path="/var/lib/kubelet/pods/06d8fe1f-0a71-480f-899e-a070c091cb79/volumes"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.224905 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74312833-84d3-4221-a8f7-07c892db5165" path="/var/lib/kubelet/pods/74312833-84d3-4221-a8f7-07c892db5165/volumes"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.226067 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af490140-34e3-4689-b13a-112b97f5cd9e" path="/var/lib/kubelet/pods/af490140-34e3-4689-b13a-112b97f5cd9e/volumes"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.228046 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d1018f-2e03-4372-98e6-cbba16adff43" path="/var/lib/kubelet/pods/f4d1018f-2e03-4372-98e6-cbba16adff43/volumes"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.229087 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc559a7-20d1-416b-ae18-bcbfc5193d32" path="/var/lib/kubelet/pods/ffc559a7-20d1-416b-ae18-bcbfc5193d32/volumes"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.262669 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-utilities\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.262756 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hds8\" (UniqueName: \"kubernetes.io/projected/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-kube-api-access-9hds8\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.262826 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-catalog-content\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.263709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-utilities\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.263773 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-catalog-content\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.283959 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hds8\" (UniqueName: \"kubernetes.io/projected/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-kube-api-access-9hds8\") pod \"redhat-operators-vrpnn\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.344760 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.522271 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrpnn"]
Feb 18 14:37:40 crc kubenswrapper[4957]: W0218 14:37:40.528791 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f04a8b9_47dc_4fdf_b0fa_b39ee03d5f00.slice/crio-c720ea27e2c752eaf35d404e0a2abed9da9d1894fddbaccc74fdf4f2a9b66915 WatchSource:0}: Error finding container c720ea27e2c752eaf35d404e0a2abed9da9d1894fddbaccc74fdf4f2a9b66915: Status 404 returned error can't find the container with id c720ea27e2c752eaf35d404e0a2abed9da9d1894fddbaccc74fdf4f2a9b66915
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.828035 4957 generic.go:334] "Generic (PLEG): container finished" podID="b95ede57-e275-4ba0-834d-43356f6b960b" containerID="e2d9cf65470ef45ee26d02963fe307d22188b216c250d0c3b19a9fadc7727d6a" exitCode=0
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.828237 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerDied","Data":"e2d9cf65470ef45ee26d02963fe307d22188b216c250d0c3b19a9fadc7727d6a"}
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.828509 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerStarted","Data":"c154b78095ab6a8b03c9e4ee4c5b514eeb1f1955452ebf46b290bfdef66afa33"}
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.830477 4957 generic.go:334] "Generic (PLEG): container finished" podID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerID="98e175a08d48848566a5f995eccdfb671697af4cdad7fb6c6b5ade1708dc6017" exitCode=0
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.830621 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerDied","Data":"98e175a08d48848566a5f995eccdfb671697af4cdad7fb6c6b5ade1708dc6017"}
Feb 18 14:37:40 crc kubenswrapper[4957]: I0218 14:37:40.830666 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerStarted","Data":"c720ea27e2c752eaf35d404e0a2abed9da9d1894fddbaccc74fdf4f2a9b66915"}
Feb 18 14:37:41 crc kubenswrapper[4957]: I0218 14:37:41.839865 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerStarted","Data":"74c401adc17ef8ccef40e2d9a358b735cb288099545d35004f38cebbc600ba8b"}
Feb 18 14:37:41 crc kubenswrapper[4957]: I0218 14:37:41.847129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerStarted","Data":"c2f8c0e4786414e7d5bcd0d103f8f2d54e41a0a86dcf067cf09f23ea6068fae2"}
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.189843 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4d2wd"]
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.190992 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.193323 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.228218 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4d2wd"]
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.291035 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-utilities\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.291481 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-catalog-content\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.291664 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ls8\" (UniqueName: \"kubernetes.io/projected/fad52f8a-f601-4c92-8545-2e384475a5d2-kube-api-access-j2ls8\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.392838 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ls8\" (UniqueName: \"kubernetes.io/projected/fad52f8a-f601-4c92-8545-2e384475a5d2-kube-api-access-j2ls8\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.392901 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-utilities\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.392933 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-catalog-content\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.393522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-catalog-content\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.393951 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-utilities\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.426096 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ls8\" (UniqueName: \"kubernetes.io/projected/fad52f8a-f601-4c92-8545-2e384475a5d2-kube-api-access-j2ls8\") pod \"community-operators-4d2wd\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.507814 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4d2wd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.589619 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mjqvd"]
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.590967 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.593386 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.602298 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjqvd"]
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.695920 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-catalog-content\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.696591 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wrm2\" (UniqueName: \"kubernetes.io/projected/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-kube-api-access-8wrm2\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.696670 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-utilities\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.754460 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4d2wd"]
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.798078 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-catalog-content\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.798128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wrm2\" (UniqueName: \"kubernetes.io/projected/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-kube-api-access-8wrm2\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.798165 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-utilities\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.798682 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-utilities\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.798777 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-catalog-content\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.820129 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wrm2\" (UniqueName: \"kubernetes.io/projected/3d9fa28a-2d86-4e9f-a5da-d5f545bb0331-kube-api-access-8wrm2\") pod \"certified-operators-mjqvd\" (UID: \"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331\") " pod="openshift-marketplace/certified-operators-mjqvd"
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.861433 4957 generic.go:334] "Generic (PLEG): container finished" podID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerID="74c401adc17ef8ccef40e2d9a358b735cb288099545d35004f38cebbc600ba8b" exitCode=0
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.861517 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerDied","Data":"74c401adc17ef8ccef40e2d9a358b735cb288099545d35004f38cebbc600ba8b"}
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.866797 4957 generic.go:334] "Generic (PLEG): container finished" podID="b95ede57-e275-4ba0-834d-43356f6b960b" containerID="c2f8c0e4786414e7d5bcd0d103f8f2d54e41a0a86dcf067cf09f23ea6068fae2" exitCode=0
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.866983 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerDied","Data":"c2f8c0e4786414e7d5bcd0d103f8f2d54e41a0a86dcf067cf09f23ea6068fae2"}
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.869661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerStarted","Data":"38dccf78b00c5de87e3ae2593540563c727e68e3d6e5acb70e72feebb58d41e0"}
Feb 18 14:37:42 crc kubenswrapper[4957]: I0218 14:37:42.955286 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mjqvd" Feb 18 14:37:43 crc kubenswrapper[4957]: I0218 14:37:43.160981 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjqvd"] Feb 18 14:37:43 crc kubenswrapper[4957]: W0218 14:37:43.165732 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9fa28a_2d86_4e9f_a5da_d5f545bb0331.slice/crio-e8a1ea1283fbad6e4850125deb3f80f66d845b5a9ae72d766acadd35af3af6e2 WatchSource:0}: Error finding container e8a1ea1283fbad6e4850125deb3f80f66d845b5a9ae72d766acadd35af3af6e2: Status 404 returned error can't find the container with id e8a1ea1283fbad6e4850125deb3f80f66d845b5a9ae72d766acadd35af3af6e2 Feb 18 14:37:43 crc kubenswrapper[4957]: I0218 14:37:43.876892 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d9fa28a-2d86-4e9f-a5da-d5f545bb0331" containerID="b746b2bad616a71f734edb46d8c4666d98c115252b22557ee49960b17bc79838" exitCode=0 Feb 18 14:37:43 crc kubenswrapper[4957]: I0218 14:37:43.876961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjqvd" event={"ID":"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331","Type":"ContainerDied","Data":"b746b2bad616a71f734edb46d8c4666d98c115252b22557ee49960b17bc79838"} Feb 18 14:37:43 crc kubenswrapper[4957]: I0218 14:37:43.877322 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjqvd" event={"ID":"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331","Type":"ContainerStarted","Data":"e8a1ea1283fbad6e4850125deb3f80f66d845b5a9ae72d766acadd35af3af6e2"} Feb 18 14:37:43 crc kubenswrapper[4957]: I0218 14:37:43.879844 4957 generic.go:334] "Generic (PLEG): container finished" podID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerID="d7670c33e30a9db9b44d7e56018e11e7ce0b6b7157d7df8cfc7fc98fe5c3f7d8" exitCode=0 Feb 18 14:37:43 crc kubenswrapper[4957]: I0218 14:37:43.879881 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerDied","Data":"d7670c33e30a9db9b44d7e56018e11e7ce0b6b7157d7df8cfc7fc98fe5c3f7d8"} Feb 18 14:37:44 crc kubenswrapper[4957]: I0218 14:37:44.885434 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjqvd" event={"ID":"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331","Type":"ContainerStarted","Data":"db095347aae0e6bc5e6a3d0284f77bfd1e1d43538ffe45c9bfd3fcbcd9aa45db"} Feb 18 14:37:44 crc kubenswrapper[4957]: I0218 14:37:44.887349 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerStarted","Data":"00382dc050747dfa9494cde8a13edb284c7b71cb29c53227cf9c5c49c8be1eb6"} Feb 18 14:37:44 crc kubenswrapper[4957]: I0218 14:37:44.889267 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerStarted","Data":"0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c"} Feb 18 14:37:44 crc kubenswrapper[4957]: I0218 14:37:44.891583 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" 
event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerStarted","Data":"2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda"} Feb 18 14:37:44 crc kubenswrapper[4957]: I0218 14:37:44.925316 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vrpnn" podStartSLOduration=2.5173646979999997 podStartE2EDuration="5.925300271s" podCreationTimestamp="2026-02-18 14:37:39 +0000 UTC" firstStartedPulling="2026-02-18 14:37:40.831460565 +0000 UTC m=+367.352325309" lastFinishedPulling="2026-02-18 14:37:44.239396138 +0000 UTC m=+370.760260882" observedRunningTime="2026-02-18 14:37:44.920321765 +0000 UTC m=+371.441186509" watchObservedRunningTime="2026-02-18 14:37:44.925300271 +0000 UTC m=+371.446165015" Feb 18 14:37:44 crc kubenswrapper[4957]: I0218 14:37:44.944799 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gllqp" podStartSLOduration=2.461533152 podStartE2EDuration="5.944778442s" podCreationTimestamp="2026-02-18 14:37:39 +0000 UTC" firstStartedPulling="2026-02-18 14:37:40.829557619 +0000 UTC m=+367.350422363" lastFinishedPulling="2026-02-18 14:37:44.312802919 +0000 UTC m=+370.833667653" observedRunningTime="2026-02-18 14:37:44.942079553 +0000 UTC m=+371.462944287" watchObservedRunningTime="2026-02-18 14:37:44.944778442 +0000 UTC m=+371.465643186" Feb 18 14:37:45 crc kubenswrapper[4957]: I0218 14:37:45.899877 4957 generic.go:334] "Generic (PLEG): container finished" podID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerID="00382dc050747dfa9494cde8a13edb284c7b71cb29c53227cf9c5c49c8be1eb6" exitCode=0 Feb 18 14:37:45 crc kubenswrapper[4957]: I0218 14:37:45.899988 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerDied","Data":"00382dc050747dfa9494cde8a13edb284c7b71cb29c53227cf9c5c49c8be1eb6"} Feb 18 14:37:45 crc kubenswrapper[4957]: I0218 14:37:45.902364 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d9fa28a-2d86-4e9f-a5da-d5f545bb0331" containerID="db095347aae0e6bc5e6a3d0284f77bfd1e1d43538ffe45c9bfd3fcbcd9aa45db" exitCode=0 Feb 18 14:37:45 crc kubenswrapper[4957]: I0218 14:37:45.903047 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjqvd" event={"ID":"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331","Type":"ContainerDied","Data":"db095347aae0e6bc5e6a3d0284f77bfd1e1d43538ffe45c9bfd3fcbcd9aa45db"} Feb 18 14:37:46 crc kubenswrapper[4957]: I0218 14:37:46.914497 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjqvd" event={"ID":"3d9fa28a-2d86-4e9f-a5da-d5f545bb0331","Type":"ContainerStarted","Data":"c6f2db998ede115b1501e4eb2712a2af1fb9cf007049c9de15856ddb67380244"} Feb 18 14:37:46 crc kubenswrapper[4957]: I0218 14:37:46.917967 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerStarted","Data":"59dd8de192c07315ff240d021ddb1434f7b3985bb35d2e2b0a569b9181b13ab4"} Feb 18 14:37:46 crc kubenswrapper[4957]: I0218 14:37:46.939613 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mjqvd" podStartSLOduration=2.521727153 podStartE2EDuration="4.939586608s" podCreationTimestamp="2026-02-18 14:37:42 +0000 UTC" 
firstStartedPulling="2026-02-18 14:37:43.879300034 +0000 UTC m=+370.400164778" lastFinishedPulling="2026-02-18 14:37:46.297159489 +0000 UTC m=+372.818024233" observedRunningTime="2026-02-18 14:37:46.933362225 +0000 UTC m=+373.454226969" watchObservedRunningTime="2026-02-18 14:37:46.939586608 +0000 UTC m=+373.460451352" Feb 18 14:37:46 crc kubenswrapper[4957]: I0218 14:37:46.957324 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4d2wd" podStartSLOduration=2.5616318319999998 podStartE2EDuration="4.957296007s" podCreationTimestamp="2026-02-18 14:37:42 +0000 UTC" firstStartedPulling="2026-02-18 14:37:43.882079535 +0000 UTC m=+370.402944279" lastFinishedPulling="2026-02-18 14:37:46.27774371 +0000 UTC m=+372.798608454" observedRunningTime="2026-02-18 14:37:46.952759524 +0000 UTC m=+373.473624278" watchObservedRunningTime="2026-02-18 14:37:46.957296007 +0000 UTC m=+373.478160751" Feb 18 14:37:49 crc kubenswrapper[4957]: I0218 14:37:49.713702 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 14:37:49 crc kubenswrapper[4957]: I0218 14:37:49.714137 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 14:37:49 crc kubenswrapper[4957]: I0218 14:37:49.761917 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 14:37:49 crc kubenswrapper[4957]: I0218 14:37:49.974374 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 14:37:50 crc kubenswrapper[4957]: I0218 14:37:50.345639 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 14:37:50 crc kubenswrapper[4957]: I0218 14:37:50.346095 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 14:37:51 crc kubenswrapper[4957]: I0218 14:37:51.386121 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 14:37:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:37:51 crc kubenswrapper[4957]: > Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.419535 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.483645 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hfcww"] Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.522930 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4d2wd" Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.523291 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4d2wd" Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.566763 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4d2wd" Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.955680 4957 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mjqvd" Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.956129 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mjqvd" Feb 18 14:37:52 crc kubenswrapper[4957]: I0218 14:37:52.995078 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mjqvd" Feb 18 14:37:53 crc kubenswrapper[4957]: I0218 14:37:53.140996 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mjqvd" Feb 18 14:37:53 crc kubenswrapper[4957]: I0218 14:37:53.141050 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4d2wd" Feb 18 14:38:00 crc kubenswrapper[4957]: I0218 14:38:00.388586 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 14:38:00 crc kubenswrapper[4957]: I0218 14:38:00.433332 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 14:38:07 crc kubenswrapper[4957]: I0218 14:38:07.279753 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:38:07 crc kubenswrapper[4957]: I0218 14:38:07.280070 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:38:07 crc kubenswrapper[4957]: I0218 14:38:07.280124 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:38:07 crc kubenswrapper[4957]: I0218 14:38:07.280728 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"039cd260c9485199567f1825b67f6c50651e5a330eb2444ec1819bfdfe45f6a1"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:38:07 crc kubenswrapper[4957]: I0218 14:38:07.280785 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://039cd260c9485199567f1825b67f6c50651e5a330eb2444ec1819bfdfe45f6a1" gracePeriod=600 Feb 18 14:38:08 crc kubenswrapper[4957]: I0218 14:38:08.176569 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="039cd260c9485199567f1825b67f6c50651e5a330eb2444ec1819bfdfe45f6a1" exitCode=0 Feb 18 14:38:08 crc kubenswrapper[4957]: I0218 14:38:08.176642 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" 
event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"039cd260c9485199567f1825b67f6c50651e5a330eb2444ec1819bfdfe45f6a1"} Feb 18 14:38:08 crc kubenswrapper[4957]: I0218 14:38:08.176964 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"135878b003cd530333b1214c0015b7d7b2523f3ba8a66d6b435c6895505efe92"} Feb 18 14:38:08 crc kubenswrapper[4957]: I0218 14:38:08.176994 4957 scope.go:117] "RemoveContainer" containerID="4227d09ba91769eac5df7ac50f6da20587fad22be3d3f27f0a938b37d76b457e" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.794637 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr"] Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.795639 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.798088 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.798365 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.799084 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.799528 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.799683 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.803834 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr"] Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.943375 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlc9q\" (UniqueName: \"kubernetes.io/projected/533ebe87-2229-480a-9df2-0e9af0b02546-kube-api-access-qlc9q\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.943467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/533ebe87-2229-480a-9df2-0e9af0b02546-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:09 crc kubenswrapper[4957]: I0218 14:38:09.943554 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/533ebe87-2229-480a-9df2-0e9af0b02546-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.044646 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlc9q\" (UniqueName: \"kubernetes.io/projected/533ebe87-2229-480a-9df2-0e9af0b02546-kube-api-access-qlc9q\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.044722 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/533ebe87-2229-480a-9df2-0e9af0b02546-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.044772 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/533ebe87-2229-480a-9df2-0e9af0b02546-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.045956 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/533ebe87-2229-480a-9df2-0e9af0b02546-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.055734 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/533ebe87-2229-480a-9df2-0e9af0b02546-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.065410 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlc9q\" (UniqueName: \"kubernetes.io/projected/533ebe87-2229-480a-9df2-0e9af0b02546-kube-api-access-qlc9q\") pod \"cluster-monitoring-operator-6d5b84845-rntqr\" (UID: \"533ebe87-2229-480a-9df2-0e9af0b02546\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.117546 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" Feb 18 14:38:10 crc kubenswrapper[4957]: I0218 14:38:10.283306 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr"] Feb 18 14:38:11 crc kubenswrapper[4957]: I0218 14:38:11.197931 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" event={"ID":"533ebe87-2229-480a-9df2-0e9af0b02546","Type":"ContainerStarted","Data":"2ab702f68983906544a0ee229d6e4546292eb6e217b6e1bcb3841f00aee07ab9"} Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.204645 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" event={"ID":"533ebe87-2229-480a-9df2-0e9af0b02546","Type":"ContainerStarted","Data":"a5396507ebf70e0df3278fb83af4842bef077ba0aa1e7a91e98ddf176e1a9906"} Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.252633 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rntqr" podStartSLOduration=1.725066203 podStartE2EDuration="3.252601913s" podCreationTimestamp="2026-02-18 14:38:09 +0000 UTC" firstStartedPulling="2026-02-18 14:38:10.291264443 +0000 UTC m=+396.812129187" lastFinishedPulling="2026-02-18 14:38:11.818800153 +0000 UTC m=+398.339664897" observedRunningTime="2026-02-18 14:38:12.238134606 +0000 UTC m=+398.758999350" watchObservedRunningTime="2026-02-18 14:38:12.252601913 +0000 UTC m=+398.773466657" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.413593 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f"] Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.414307 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.418813 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-qcb2z" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.418948 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.425985 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f"] Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.480510 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/eba7e3a4-c6f4-4473-bcec-23a777ba8798-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-htr7f\" (UID: \"eba7e3a4-c6f4-4473-bcec-23a777ba8798\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.581857 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/eba7e3a4-c6f4-4473-bcec-23a777ba8798-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-htr7f\" (UID: \"eba7e3a4-c6f4-4473-bcec-23a777ba8798\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.588741 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/eba7e3a4-c6f4-4473-bcec-23a777ba8798-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-htr7f\" (UID: \"eba7e3a4-c6f4-4473-bcec-23a777ba8798\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.731866 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:12 crc kubenswrapper[4957]: I0218 14:38:12.904211 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f"] Feb 18 14:38:12 crc kubenswrapper[4957]: W0218 14:38:12.909613 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba7e3a4_c6f4_4473_bcec_23a777ba8798.slice/crio-989589da05b452a66a700c7ea8ec600f64edd41f4206589b4317a439557131d3 WatchSource:0}: Error finding container 989589da05b452a66a700c7ea8ec600f64edd41f4206589b4317a439557131d3: Status 404 returned error can't find the container with id 989589da05b452a66a700c7ea8ec600f64edd41f4206589b4317a439557131d3 Feb 18 14:38:13 crc kubenswrapper[4957]: I0218 14:38:13.209853 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" event={"ID":"eba7e3a4-c6f4-4473-bcec-23a777ba8798","Type":"ContainerStarted","Data":"989589da05b452a66a700c7ea8ec600f64edd41f4206589b4317a439557131d3"} Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.221432 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" event={"ID":"eba7e3a4-c6f4-4473-bcec-23a777ba8798","Type":"ContainerStarted","Data":"a95bc19447cc8da6480533d1418392662f3d637a86c7ede2044b609a58c281db"} Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.221777 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.232446 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.238039 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podStartSLOduration=1.993188846 podStartE2EDuration="3.238012134s" podCreationTimestamp="2026-02-18 14:38:12 +0000 UTC" firstStartedPulling="2026-02-18 14:38:12.911955026 +0000 UTC m=+399.432819790" lastFinishedPulling="2026-02-18 14:38:14.156778334 +0000 UTC m=+400.677643078" observedRunningTime="2026-02-18 14:38:15.233661056 +0000 UTC m=+401.754525800" watchObservedRunningTime="2026-02-18 14:38:15.238012134 +0000 UTC m=+401.758876898" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.480226 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-65pzp"] Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.481410 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.483704 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.483936 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.483997 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-mp9vs" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.487591 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.495387 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-65pzp"] Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.520225 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrwg\" (UniqueName: \"kubernetes.io/projected/583e712f-4dc8-45d0-a5f5-9c9053290693-kube-api-access-jvrwg\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.520296 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/583e712f-4dc8-45d0-a5f5-9c9053290693-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.520369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/583e712f-4dc8-45d0-a5f5-9c9053290693-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.520393 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/583e712f-4dc8-45d0-a5f5-9c9053290693-metrics-client-ca\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.621869 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/583e712f-4dc8-45d0-a5f5-9c9053290693-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.621931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/583e712f-4dc8-45d0-a5f5-9c9053290693-metrics-client-ca\") pod 
\"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.621990 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrwg\" (UniqueName: \"kubernetes.io/projected/583e712f-4dc8-45d0-a5f5-9c9053290693-kube-api-access-jvrwg\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.622017 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/583e712f-4dc8-45d0-a5f5-9c9053290693-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.623482 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/583e712f-4dc8-45d0-a5f5-9c9053290693-metrics-client-ca\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.633588 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/583e712f-4dc8-45d0-a5f5-9c9053290693-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.634763 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/583e712f-4dc8-45d0-a5f5-9c9053290693-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.638085 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrwg\" (UniqueName: \"kubernetes.io/projected/583e712f-4dc8-45d0-a5f5-9c9053290693-kube-api-access-jvrwg\") pod \"prometheus-operator-db54df47d-65pzp\" (UID: \"583e712f-4dc8-45d0-a5f5-9c9053290693\") " pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:15 crc kubenswrapper[4957]: I0218 14:38:15.796800 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" Feb 18 14:38:16 crc kubenswrapper[4957]: I0218 14:38:16.000330 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-65pzp"] Feb 18 14:38:16 crc kubenswrapper[4957]: W0218 14:38:16.007750 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod583e712f_4dc8_45d0_a5f5_9c9053290693.slice/crio-ccabf822b88ffadb91c4434cecae0e11321cd37361dea8ad38666a88bfed3216 WatchSource:0}: Error finding container ccabf822b88ffadb91c4434cecae0e11321cd37361dea8ad38666a88bfed3216: Status 404 returned error can't find the container with id ccabf822b88ffadb91c4434cecae0e11321cd37361dea8ad38666a88bfed3216 Feb 18 14:38:16 crc kubenswrapper[4957]: I0218 14:38:16.227263 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" event={"ID":"583e712f-4dc8-45d0-a5f5-9c9053290693","Type":"ContainerStarted","Data":"ccabf822b88ffadb91c4434cecae0e11321cd37361dea8ad38666a88bfed3216"} Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.525487 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" podUID="1c7a025e-0270-445c-ac05-34ffe3502176" containerName="registry" containerID="cri-o://4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1" gracePeriod=30 Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.893027 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953001 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953076 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c7a025e-0270-445c-ac05-34ffe3502176-ca-trust-extracted\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953137 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-trusted-ca\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953159 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-bound-sa-token\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953200 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc 
kubenswrapper[4957]: I0218 14:38:17.953217 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-registry-tls\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953240 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c7a025e-0270-445c-ac05-34ffe3502176-installation-pull-secrets\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.953277 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmnd\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-kube-api-access-8hmnd\") pod \"1c7a025e-0270-445c-ac05-34ffe3502176\" (UID: \"1c7a025e-0270-445c-ac05-34ffe3502176\") " Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.954577 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.955687 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.959997 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.960300 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-kube-api-access-8hmnd" (OuterVolumeSpecName: "kube-api-access-8hmnd") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "kube-api-access-8hmnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.960373 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7a025e-0270-445c-ac05-34ffe3502176-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.960432 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.962782 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:38:17 crc kubenswrapper[4957]: I0218 14:38:17.971857 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c7a025e-0270-445c-ac05-34ffe3502176-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1c7a025e-0270-445c-ac05-34ffe3502176" (UID: "1c7a025e-0270-445c-ac05-34ffe3502176"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.054895 4957 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.054953 4957 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.054964 4957 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c7a025e-0270-445c-ac05-34ffe3502176-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.054975 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmnd\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-kube-api-access-8hmnd\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.054984 4957 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c7a025e-0270-445c-ac05-34ffe3502176-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.054992 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c7a025e-0270-445c-ac05-34ffe3502176-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.055001 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c7a025e-0270-445c-ac05-34ffe3502176-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.245372 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" 
event={"ID":"583e712f-4dc8-45d0-a5f5-9c9053290693","Type":"ContainerStarted","Data":"67f0e2c8d91db565659318d901c897b34610f7dc1c2da4637ce474f6bfd677f2"} Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.247149 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" event={"ID":"583e712f-4dc8-45d0-a5f5-9c9053290693","Type":"ContainerStarted","Data":"30fc70d6f65d6e03e92c453cd2d69fa103f8c29e54332d5538cd322fd889944c"} Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.247853 4957 generic.go:334] "Generic (PLEG): container finished" podID="1c7a025e-0270-445c-ac05-34ffe3502176" containerID="4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1" exitCode=0 Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.247897 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" event={"ID":"1c7a025e-0270-445c-ac05-34ffe3502176","Type":"ContainerDied","Data":"4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1"} Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.247935 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" event={"ID":"1c7a025e-0270-445c-ac05-34ffe3502176","Type":"ContainerDied","Data":"a73696d362f145a294c2ebfcd25250a3feb7c9ac05c6d0a1efd85f0e48b58fd1"} Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.247959 4957 scope.go:117] "RemoveContainer" containerID="4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.248136 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hfcww" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.278685 4957 scope.go:117] "RemoveContainer" containerID="4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.279734 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-65pzp" podStartSLOduration=1.9650208839999999 podStartE2EDuration="3.279696229s" podCreationTimestamp="2026-02-18 14:38:15 +0000 UTC" firstStartedPulling="2026-02-18 14:38:16.009607096 +0000 UTC m=+402.530471840" lastFinishedPulling="2026-02-18 14:38:17.324282451 +0000 UTC m=+403.845147185" observedRunningTime="2026-02-18 14:38:18.268532669 +0000 UTC m=+404.789397413" watchObservedRunningTime="2026-02-18 14:38:18.279696229 +0000 UTC m=+404.800560973" Feb 18 14:38:18 crc kubenswrapper[4957]: E0218 14:38:18.282995 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1\": container with ID starting with 4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1 not found: ID does not exist" containerID="4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.283050 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1"} err="failed to get container status \"4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1\": rpc error: code = NotFound desc = could not find container \"4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1\": container 
with ID starting with 4d4a82ad50b3e71d9bb688538b27c522aedfcb1d714475cdf4e65bbac5113ac1 not found: ID does not exist" Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.287738 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hfcww"] Feb 18 14:38:18 crc kubenswrapper[4957]: I0218 14:38:18.293616 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hfcww"] Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.826863 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"] Feb 18 14:38:19 crc kubenswrapper[4957]: E0218 14:38:19.828616 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7a025e-0270-445c-ac05-34ffe3502176" containerName="registry" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.828773 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7a025e-0270-445c-ac05-34ffe3502176" containerName="registry" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.828965 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7a025e-0270-445c-ac05-34ffe3502176" containerName="registry" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.829973 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.834096 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.834361 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.834519 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5h89m" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.836381 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.838235 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"] Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.839518 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.846449 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-xmzln" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.846803 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.847013 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.857170 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"] Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.861059 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"] Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878379 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03237575-0216-483d-8871-dfe3b61a7d94-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878544 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sdkm\" (UniqueName: \"kubernetes.io/projected/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-kube-api-access-8sdkm\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878614 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878648 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2nf\" (UniqueName: \"kubernetes.io/projected/03237575-0216-483d-8871-dfe3b61a7d94-kube-api-access-cb2nf\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878695 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/03237575-0216-483d-8871-dfe3b61a7d94-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878774 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.878796 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.896876 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-ccnfk"] Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.898065 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.902753 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.902760 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.906021 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-4x4xl" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.979677 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.979964 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980058 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2nf\" (UniqueName: \"kubernetes.io/projected/03237575-0216-483d-8871-dfe3b61a7d94-kube-api-access-cb2nf\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980147 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa38d701-d862-47f8-bb5e-e00954a1054f-metrics-client-ca\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980219 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tsm\" (UniqueName: \"kubernetes.io/projected/fa38d701-d862-47f8-bb5e-e00954a1054f-kube-api-access-84tsm\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980331 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-textfile\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980433 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/03237575-0216-483d-8871-dfe3b61a7d94-volume-directive-shadow\") pod 
\"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-tls\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980616 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-wtmp\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980707 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980789 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-sys\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980870 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980941 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.981036 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03237575-0216-483d-8871-dfe3b61a7d94-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.981386 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:19 crc kubenswrapper[4957]: 
I0218 14:38:19.981477 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.981568 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sdkm\" (UniqueName: \"kubernetes.io/projected/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-kube-api-access-8sdkm\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.981644 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-root\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk"
Feb 18 14:38:19 crc kubenswrapper[4957]: E0218 14:38:19.981054 4957 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Feb 18 14:38:19 crc kubenswrapper[4957]: E0218 14:38:19.981848 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-tls podName:8e8ec204-6b21-4ca6-8f41-86d8ee06b82a nodeName:}" failed. No retries permitted until 2026-02-18 14:38:20.481831724 +0000 UTC m=+407.002696468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-5f9rm" (UID: "8e8ec204-6b21-4ca6-8f41-86d8ee06b82a") : secret "openshift-state-metrics-tls" not found
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.981878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.980979 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/03237575-0216-483d-8871-dfe3b61a7d94-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"
Feb 18 14:38:19 crc kubenswrapper[4957]: E0218 14:38:19.981940 4957 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Feb 18 14:38:19 crc kubenswrapper[4957]: E0218 14:38:19.982122 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-tls podName:03237575-0216-483d-8871-dfe3b61a7d94 nodeName:}" failed. No retries permitted until 2026-02-18 14:38:20.482113162 +0000 UTC m=+407.002977906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-88qng" (UID: "03237575-0216-483d-8871-dfe3b61a7d94") : secret "kube-state-metrics-tls" not found
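Editorial note, not part of the log: the two nestedpendingoperations.go errors above schedule the next mount attempt exactly durationBeforeRetry (500ms) after the failed one; the retried mounts are the MountVolume.SetUp succeeded entries for these same volumes at 14:38:20.49 further down. A minimal check in Python, truncating the quoted timestamps to microseconds:

from datetime import datetime, timedelta

fmt = "%Y-%m-%d %H:%M:%S.%f"
deadline = datetime.strptime("2026-02-18 14:38:20.481831", fmt)  # "No retries permitted until"
attempt = deadline - timedelta(milliseconds=500)                 # durationBeforeRetry 500ms
print(attempt)  # 2026-02-18 14:38:19.981831, within microseconds of the failure logged above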
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.982641 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.982687 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03237575-0216-483d-8871-dfe3b61a7d94-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.985487 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"
Feb 18 14:38:19 crc kubenswrapper[4957]: I0218 14:38:19.985495 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"
Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.009303 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sdkm\" (UniqueName: \"kubernetes.io/projected/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-kube-api-access-8sdkm\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"
Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.046508 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2nf\" (UniqueName: \"kubernetes.io/projected/03237575-0216-483d-8871-dfe3b61a7d94-kube-api-access-cb2nf\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"
Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083167 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-root\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk"
Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083248 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa38d701-d862-47f8-bb5e-e00954a1054f-metrics-client-ca\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk"
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa38d701-d862-47f8-bb5e-e00954a1054f-metrics-client-ca\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84tsm\" (UniqueName: \"kubernetes.io/projected/fa38d701-d862-47f8-bb5e-e00954a1054f-kube-api-access-84tsm\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083315 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-textfile\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083321 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-root\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083338 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-tls\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083437 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-wtmp\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: E0218 14:38:20.083468 4957 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 18 14:38:20 crc kubenswrapper[4957]: E0218 14:38:20.083527 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-tls podName:fa38d701-d862-47f8-bb5e-e00954a1054f nodeName:}" failed. No retries permitted until 2026-02-18 14:38:20.583510461 +0000 UTC m=+407.104375205 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-tls") pod "node-exporter-ccnfk" (UID: "fa38d701-d862-47f8-bb5e-e00954a1054f") : secret "node-exporter-tls" not found Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083823 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-wtmp\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083472 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.083908 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-sys\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.084032 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fa38d701-d862-47f8-bb5e-e00954a1054f-sys\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.084238 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-textfile\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.084521 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa38d701-d862-47f8-bb5e-e00954a1054f-metrics-client-ca\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.086913 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.104469 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84tsm\" (UniqueName: \"kubernetes.io/projected/fa38d701-d862-47f8-bb5e-e00954a1054f-kube-api-access-84tsm\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.223163 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7a025e-0270-445c-ac05-34ffe3502176" 
path="/var/lib/kubelet/pods/1c7a025e-0270-445c-ac05-34ffe3502176/volumes" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.489475 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.489607 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.492751 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03237575-0216-483d-8871-dfe3b61a7d94-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-88qng\" (UID: \"03237575-0216-483d-8871-dfe3b61a7d94\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.493400 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8ec204-6b21-4ca6-8f41-86d8ee06b82a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5f9rm\" (UID: \"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.591061 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-tls\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.594872 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/fa38d701-d862-47f8-bb5e-e00954a1054f-node-exporter-tls\") pod \"node-exporter-ccnfk\" (UID: \"fa38d701-d862-47f8-bb5e-e00954a1054f\") " pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.758141 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.771794 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.823444 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-ccnfk" Feb 18 14:38:20 crc kubenswrapper[4957]: W0218 14:38:20.844749 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa38d701_d862_47f8_bb5e_e00954a1054f.slice/crio-6b7337cccd5766cbabb8311845b5efc53e2230a5899bf39b43620353bb303c76 WatchSource:0}: Error finding container 6b7337cccd5766cbabb8311845b5efc53e2230a5899bf39b43620353bb303c76: Status 404 returned error can't find the container with id 6b7337cccd5766cbabb8311845b5efc53e2230a5899bf39b43620353bb303c76 Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.885285 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.887028 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.893082 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.893330 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.893509 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.893649 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.893945 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-bcczj" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.894190 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.894310 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.903825 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.928441 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.937613 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998749 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998806 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc66d3f7-be43-4189-94c5-59c46296305f-config-out\") pod \"alertmanager-main-0\" (UID: 
\"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998844 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m762t\" (UniqueName: \"kubernetes.io/projected/fc66d3f7-be43-4189-94c5-59c46296305f-kube-api-access-m762t\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998876 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998905 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998925 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-web-config\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998959 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc66d3f7-be43-4189-94c5-59c46296305f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.998981 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fc66d3f7-be43-4189-94c5-59c46296305f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.999002 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-config-volume\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.999031 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.999060 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc66d3f7-be43-4189-94c5-59c46296305f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:20 crc kubenswrapper[4957]: I0218 14:38:20.999076 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc66d3f7-be43-4189-94c5-59c46296305f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.090985 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng"] Feb 18 14:38:21 crc kubenswrapper[4957]: W0218 14:38:21.097271 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03237575_0216_483d_8871_dfe3b61a7d94.slice/crio-9531492dffe6c837309c5e42df67664318837a4756ed69da52a35b247b931175 WatchSource:0}: Error finding container 9531492dffe6c837309c5e42df67664318837a4756ed69da52a35b247b931175: Status 404 returned error can't find the container with id 9531492dffe6c837309c5e42df67664318837a4756ed69da52a35b247b931175 Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.099824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.099860 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-web-config\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.099902 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc66d3f7-be43-4189-94c5-59c46296305f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.099923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fc66d3f7-be43-4189-94c5-59c46296305f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.099946 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-config-volume\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.099976 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.100012 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc66d3f7-be43-4189-94c5-59c46296305f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.100039 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc66d3f7-be43-4189-94c5-59c46296305f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.100077 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.100104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc66d3f7-be43-4189-94c5-59c46296305f-config-out\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.100138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m762t\" (UniqueName: \"kubernetes.io/projected/fc66d3f7-be43-4189-94c5-59c46296305f-kube-api-access-m762t\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.100301 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.102156 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fc66d3f7-be43-4189-94c5-59c46296305f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.103730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc66d3f7-be43-4189-94c5-59c46296305f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.105191 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc66d3f7-be43-4189-94c5-59c46296305f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.107854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.108576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.108742 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.109039 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-config-volume\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.110188 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.110649 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc66d3f7-be43-4189-94c5-59c46296305f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.115674 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc66d3f7-be43-4189-94c5-59c46296305f-config-out\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.116117 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc66d3f7-be43-4189-94c5-59c46296305f-web-config\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.122787 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m762t\" (UniqueName: \"kubernetes.io/projected/fc66d3f7-be43-4189-94c5-59c46296305f-kube-api-access-m762t\") pod \"alertmanager-main-0\" (UID: \"fc66d3f7-be43-4189-94c5-59c46296305f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.240878 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.274188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" event={"ID":"03237575-0216-483d-8871-dfe3b61a7d94","Type":"ContainerStarted","Data":"9531492dffe6c837309c5e42df67664318837a4756ed69da52a35b247b931175"} Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.275160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ccnfk" event={"ID":"fa38d701-d862-47f8-bb5e-e00954a1054f","Type":"ContainerStarted","Data":"6b7337cccd5766cbabb8311845b5efc53e2230a5899bf39b43620353bb303c76"} Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.371048 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm"] Feb 18 14:38:21 crc kubenswrapper[4957]: W0218 14:38:21.396701 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e8ec204_6b21_4ca6_8f41_86d8ee06b82a.slice/crio-86dc260aab62c24b9a7dbfc06ca38d2a2831da26acd3f82d72534560c6cd7545 WatchSource:0}: Error finding container 86dc260aab62c24b9a7dbfc06ca38d2a2831da26acd3f82d72534560c6cd7545: Status 404 returned error can't find the container with id 86dc260aab62c24b9a7dbfc06ca38d2a2831da26acd3f82d72534560c6cd7545 Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.447071 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.802498 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr"] Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.804382 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.807901 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.808020 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.809205 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.811638 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-e6s3u2flchvar" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.812179 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-q24mn" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.812353 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.815262 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.822025 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr"] Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.911708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.911781 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.911856 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-grpc-tls\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.911931 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/513d3d65-d89a-4418-b0c2-4eadd3ae5600-metrics-client-ca\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.911962 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q5n8m\" (UniqueName: \"kubernetes.io/projected/513d3d65-d89a-4418-b0c2-4eadd3ae5600-kube-api-access-q5n8m\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.912007 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.912039 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:21 crc kubenswrapper[4957]: I0218 14:38:21.912098 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-tls\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.013754 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/513d3d65-d89a-4418-b0c2-4eadd3ae5600-metrics-client-ca\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.013828 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5n8m\" (UniqueName: \"kubernetes.io/projected/513d3d65-d89a-4418-b0c2-4eadd3ae5600-kube-api-access-q5n8m\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.013860 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.013891 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.013937 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-tls\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.013985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.014008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.014060 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-grpc-tls\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.014779 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/513d3d65-d89a-4418-b0c2-4eadd3ae5600-metrics-client-ca\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.019373 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.019708 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.020049 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.020298 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-tls\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.021033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-grpc-tls\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.029407 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5n8m\" (UniqueName: \"kubernetes.io/projected/513d3d65-d89a-4418-b0c2-4eadd3ae5600-kube-api-access-q5n8m\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.031756 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/513d3d65-d89a-4418-b0c2-4eadd3ae5600-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5bd76d8bd7-7bctr\" (UID: \"513d3d65-d89a-4418-b0c2-4eadd3ae5600\") " pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.174161 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.295913 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" event={"ID":"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a","Type":"ContainerStarted","Data":"61c6ed915dba3ef6ac51775920391dfb837b6174d0d3a2ff80ae89f72da980c9"} Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.295991 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" event={"ID":"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a","Type":"ContainerStarted","Data":"82e90db3a869b9f51b2f25d728c1c9fb4362a6438f1fec2f345f05dc7f5bfb09"} Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.296027 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" event={"ID":"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a","Type":"ContainerStarted","Data":"86dc260aab62c24b9a7dbfc06ca38d2a2831da26acd3f82d72534560c6cd7545"} Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.297538 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"89b889a103d20392e1032496a168580267adc56606ca60515f850a0e9fb969ac"} Feb 18 14:38:22 crc kubenswrapper[4957]: I0218 14:38:22.964120 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr"] Feb 18 14:38:23 crc kubenswrapper[4957]: W0218 14:38:23.188065 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod513d3d65_d89a_4418_b0c2_4eadd3ae5600.slice/crio-d240f6c62523e526ca968cbfac8c231a175a67acdeb1708cbe49142643303fb3 WatchSource:0}: Error finding container 
d240f6c62523e526ca968cbfac8c231a175a67acdeb1708cbe49142643303fb3: Status 404 returned error can't find the container with id d240f6c62523e526ca968cbfac8c231a175a67acdeb1708cbe49142643303fb3 Feb 18 14:38:23 crc kubenswrapper[4957]: I0218 14:38:23.303990 4957 generic.go:334] "Generic (PLEG): container finished" podID="fa38d701-d862-47f8-bb5e-e00954a1054f" containerID="51ee62a16bbeaf4c990aeac04ef02e54b2c29a45b0d46c12deec51a727e52bee" exitCode=0 Feb 18 14:38:23 crc kubenswrapper[4957]: I0218 14:38:23.304034 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ccnfk" event={"ID":"fa38d701-d862-47f8-bb5e-e00954a1054f","Type":"ContainerDied","Data":"51ee62a16bbeaf4c990aeac04ef02e54b2c29a45b0d46c12deec51a727e52bee"} Feb 18 14:38:23 crc kubenswrapper[4957]: I0218 14:38:23.305544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"d240f6c62523e526ca968cbfac8c231a175a67acdeb1708cbe49142643303fb3"} Feb 18 14:38:23 crc kubenswrapper[4957]: I0218 14:38:23.306870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" event={"ID":"03237575-0216-483d-8871-dfe3b61a7d94","Type":"ContainerStarted","Data":"f20c230e22c493cb374566ce2229aa3dc4eee38e50c489cb2192fd0770fc65ac"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.317486 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ccnfk" event={"ID":"fa38d701-d862-47f8-bb5e-e00954a1054f","Type":"ContainerStarted","Data":"6c43abf5e2fdd53f82741f0e6a25aea5ca48db39fcf97c2502365d3b4026a94f"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.318222 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ccnfk" event={"ID":"fa38d701-d862-47f8-bb5e-e00954a1054f","Type":"ContainerStarted","Data":"010c8cb00440e5bba4fee3075f26583026f86cc3401f1fa7d7a7539293c88c55"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.322590 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" event={"ID":"8e8ec204-6b21-4ca6-8f41-86d8ee06b82a","Type":"ContainerStarted","Data":"9d73c6fed1e799199a6ff1e54d08300dc787817c5008bb6ebde65e32055791dd"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.333882 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" event={"ID":"03237575-0216-483d-8871-dfe3b61a7d94","Type":"ContainerStarted","Data":"1008d9796e8c88c809c7d5a43022a7f67c36e0af8c470b954cc3bb3de3e9396b"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.333934 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" event={"ID":"03237575-0216-483d-8871-dfe3b61a7d94","Type":"ContainerStarted","Data":"b9a4bf3cb0adc4033f39a9f7d6c5207b9d536af1c3d2a9be7cfd77b3085ca129"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.338536 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-ccnfk" podStartSLOduration=3.684389129 podStartE2EDuration="5.338514242s" podCreationTimestamp="2026-02-18 14:38:19 +0000 UTC" firstStartedPulling="2026-02-18 14:38:20.857329529 +0000 UTC m=+407.378194273" lastFinishedPulling="2026-02-18 14:38:22.511454642 +0000 UTC m=+409.032319386" observedRunningTime="2026-02-18 
14:38:24.33639728 +0000 UTC m=+410.857262044" watchObservedRunningTime="2026-02-18 14:38:24.338514242 +0000 UTC m=+410.859378996" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.343446 4957 generic.go:334] "Generic (PLEG): container finished" podID="fc66d3f7-be43-4189-94c5-59c46296305f" containerID="7cf1949f1c3ec374f7c21034606d8aa499cce659f8e237b8ee2bf20f7f584c77" exitCode=0 Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.343497 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerDied","Data":"7cf1949f1c3ec374f7c21034606d8aa499cce659f8e237b8ee2bf20f7f584c77"} Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.371625 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5f9rm" podStartSLOduration=3.450631454 podStartE2EDuration="5.371599441s" podCreationTimestamp="2026-02-18 14:38:19 +0000 UTC" firstStartedPulling="2026-02-18 14:38:21.631830356 +0000 UTC m=+408.152695100" lastFinishedPulling="2026-02-18 14:38:23.552798333 +0000 UTC m=+410.073663087" observedRunningTime="2026-02-18 14:38:24.368684544 +0000 UTC m=+410.889549298" watchObservedRunningTime="2026-02-18 14:38:24.371599441 +0000 UTC m=+410.892464185" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.398952 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-88qng" podStartSLOduration=3.981244469 podStartE2EDuration="5.398892178s" podCreationTimestamp="2026-02-18 14:38:19 +0000 UTC" firstStartedPulling="2026-02-18 14:38:21.098792341 +0000 UTC m=+407.619657085" lastFinishedPulling="2026-02-18 14:38:22.51644005 +0000 UTC m=+409.037304794" observedRunningTime="2026-02-18 14:38:24.392952272 +0000 UTC m=+410.913817026" watchObservedRunningTime="2026-02-18 14:38:24.398892178 +0000 UTC m=+410.919756922" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.615447 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66744675f6-ctstc"] Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.616365 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.636979 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66744675f6-ctstc"] Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759763 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-trusted-ca-bundle\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759814 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-config\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759835 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-serving-cert\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759884 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-oauth-config\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759907 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-service-ca\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759949 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-oauth-serving-cert\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.759973 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcgw\" (UniqueName: \"kubernetes.io/projected/6e57bd79-bd26-4c30-ae82-8cf6215bad62-kube-api-access-frcgw\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.861356 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-oauth-config\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc 
kubenswrapper[4957]: I0218 14:38:24.861724 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-service-ca\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.861769 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-oauth-serving-cert\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.861802 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcgw\" (UniqueName: \"kubernetes.io/projected/6e57bd79-bd26-4c30-ae82-8cf6215bad62-kube-api-access-frcgw\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.861825 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-trusted-ca-bundle\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.861843 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-config\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.861859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-serving-cert\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.862975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-service-ca\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.863105 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-config\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.863132 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-trusted-ca-bundle\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.863369 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-oauth-serving-cert\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.868639 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-serving-cert\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.876525 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-oauth-config\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.886197 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcgw\" (UniqueName: \"kubernetes.io/projected/6e57bd79-bd26-4c30-ae82-8cf6215bad62-kube-api-access-frcgw\") pod \"console-66744675f6-ctstc\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:24 crc kubenswrapper[4957]: I0218 14:38:24.933401 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.098144 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7ffc4d6784-c7kvk"] Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.099033 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.101729 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.101804 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.101961 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.102014 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-fra4n4n5oa7mc" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.102275 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.102513 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-cs8jt" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.106949 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7ffc4d6784-c7kvk"] Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168029 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-client-ca-bundle\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168212 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-secret-metrics-client-certs\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168243 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-secret-metrics-server-tls\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168327 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-audit-log\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" 
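
The entries above trace the kubelet's standard volume sequence for a new pod: reconciler_common.go:245 logs operationExecutor.VerifyControllerAttachedVolume for each declared volume, reconciler_common.go:218 logs operationExecutor.MountVolume once mounting begins, and operation_generator.go:637 logs MountVolume.SetUp succeeded when the configmap, secret, projected, or empty-dir volume is materialized on disk. Below is a minimal Go sketch for auditing a log like this one for volumes that start mounting but never report SetUp succeeded; the two pattern strings are copied from the entries above (including the escaped quotes klog emits), while the program itself is illustrative and not kubelet code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// klog escapes quotes inside structured values, so the raw file
	// literally contains \"volume-name\"; the regexps match that form.
	started := regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\"([^"\\]+)\\"`)
	succeeded := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"`)

	// Keyed by volume name only for brevity; a real tool would also key
	// by the pod UID printed on the same line.
	pending := map[string]bool{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // kubelet lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true
		}
		if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[1])
		}
	}
	for v := range pending {
		fmt.Println("mount started but no SetUp succeeded:", v)
	}
}

Fed the entries in this section on stdin, one log line per line, the sketch should print nothing: every MountVolume started here (thanos-querier, console, metrics-server, monitoring-plugin, prometheus-k8s-0) is followed by a matching SetUp succeeded.
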
Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168348 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-metrics-server-audit-profiles\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.168439 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w844t\" (UniqueName: \"kubernetes.io/projected/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-kube-api-access-w844t\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269208 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-client-ca-bundle\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269251 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-secret-metrics-client-certs\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-secret-metrics-server-tls\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269297 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269350 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-audit-log\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269389 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-metrics-server-audit-profiles\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.269461 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w844t\" (UniqueName: \"kubernetes.io/projected/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-kube-api-access-w844t\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.270376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-audit-log\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.270717 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.271005 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-metrics-server-audit-profiles\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.273348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-client-ca-bundle\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.274610 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-secret-metrics-client-certs\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.274932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-secret-metrics-server-tls\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.284823 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w844t\" (UniqueName: \"kubernetes.io/projected/bdcbb72c-6e5e-4167-baf4-ca754b4122a0-kube-api-access-w844t\") pod \"metrics-server-7ffc4d6784-c7kvk\" (UID: \"bdcbb72c-6e5e-4167-baf4-ca754b4122a0\") " pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.421724 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.605730 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk"] Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.607135 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.610448 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.610497 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.613841 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk"] Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.658601 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7ffc4d6784-c7kvk"] Feb 18 14:38:25 crc kubenswrapper[4957]: W0218 14:38:25.671172 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdcbb72c_6e5e_4167_baf4_ca754b4122a0.slice/crio-8fed578f1a7bea3082e2de6215d54d01cb7444f068f789982f7b8d544684cbe7 WatchSource:0}: Error finding container 8fed578f1a7bea3082e2de6215d54d01cb7444f068f789982f7b8d544684cbe7: Status 404 returned error can't find the container with id 8fed578f1a7bea3082e2de6215d54d01cb7444f068f789982f7b8d544684cbe7 Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.675259 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d4390cd-8a73-4bc0-8a08-8f018c308d17-monitoring-plugin-cert\") pod \"monitoring-plugin-6fb88c9bd-6wgxk\" (UID: \"4d4390cd-8a73-4bc0-8a08-8f018c308d17\") " pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.777102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d4390cd-8a73-4bc0-8a08-8f018c308d17-monitoring-plugin-cert\") pod \"monitoring-plugin-6fb88c9bd-6wgxk\" (UID: \"4d4390cd-8a73-4bc0-8a08-8f018c308d17\") " pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.785189 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d4390cd-8a73-4bc0-8a08-8f018c308d17-monitoring-plugin-cert\") pod \"monitoring-plugin-6fb88c9bd-6wgxk\" (UID: \"4d4390cd-8a73-4bc0-8a08-8f018c308d17\") " pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.908750 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66744675f6-ctstc"] Feb 18 14:38:25 crc kubenswrapper[4957]: W0218 14:38:25.917142 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e57bd79_bd26_4c30_ae82_8cf6215bad62.slice/crio-9d5e3b08e253955b9c1542975a73a24b759957e0c4bce6c28db907bec9c1ae38 WatchSource:0}: Error finding container 
9d5e3b08e253955b9c1542975a73a24b759957e0c4bce6c28db907bec9c1ae38: Status 404 returned error can't find the container with id 9d5e3b08e253955b9c1542975a73a24b759957e0c4bce6c28db907bec9c1ae38 Feb 18 14:38:25 crc kubenswrapper[4957]: I0218 14:38:25.962851 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.183554 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.188016 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.192834 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-hdwwv" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.193485 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-f34opb3rar1tq" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.193810 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.193948 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.194077 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.194249 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.194366 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.194770 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.194908 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.195266 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.195385 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.198113 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.206213 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.289660 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk"] Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.290593 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.290723 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-web-config\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.290839 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.290923 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.290995 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291074 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291156 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291240 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-config\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291326 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: 
I0218 14:38:26.291403 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291596 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291676 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.291746 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rvnb\" (UniqueName: \"kubernetes.io/projected/564a48e7-438e-4374-9b43-92409e093ae2-kube-api-access-8rvnb\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.292925 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.293017 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/564a48e7-438e-4374-9b43-92409e093ae2-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.293172 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/564a48e7-438e-4374-9b43-92409e093ae2-config-out\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.293270 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-metrics-client-certs\") 
pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.309203 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:38:26 crc kubenswrapper[4957]: W0218 14:38:26.322705 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d4390cd_8a73_4bc0_8a08_8f018c308d17.slice/crio-4cd254890a287a062f6d0660d2e87418d9a578c55735671d34d0382d387d4812 WatchSource:0}: Error finding container 4cd254890a287a062f6d0660d2e87418d9a578c55735671d34d0382d387d4812: Status 404 returned error can't find the container with id 4cd254890a287a062f6d0660d2e87418d9a578c55735671d34d0382d387d4812 Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.365860 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66744675f6-ctstc" event={"ID":"6e57bd79-bd26-4c30-ae82-8cf6215bad62","Type":"ContainerStarted","Data":"ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.365911 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66744675f6-ctstc" event={"ID":"6e57bd79-bd26-4c30-ae82-8cf6215bad62","Type":"ContainerStarted","Data":"9d5e3b08e253955b9c1542975a73a24b759957e0c4bce6c28db907bec9c1ae38"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.370676 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" event={"ID":"bdcbb72c-6e5e-4167-baf4-ca754b4122a0","Type":"ContainerStarted","Data":"8fed578f1a7bea3082e2de6215d54d01cb7444f068f789982f7b8d544684cbe7"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.372223 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" event={"ID":"4d4390cd-8a73-4bc0-8a08-8f018c308d17","Type":"ContainerStarted","Data":"4cd254890a287a062f6d0660d2e87418d9a578c55735671d34d0382d387d4812"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.374642 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"23c764dd89c8b85c30c35063e50247e7bb14d8b5b51397699604fb5d3f0e7598"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.374733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"44c86584f156cf5714913bd0d929589b0dbaff1d92badc80c7b9fde4320d8dec"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.374819 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"4c821d59e76dcc9ece49fcdf3fa6702a52253557b430b65957177b565bb2e70b"} Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.398841 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 
14:38:26.399153 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399229 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-web-config\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399305 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399376 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399586 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399662 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399746 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-config\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 
14:38:26.399902 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.399980 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.400057 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.400133 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.400206 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rvnb\" (UniqueName: \"kubernetes.io/projected/564a48e7-438e-4374-9b43-92409e093ae2-kube-api-access-8rvnb\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.400282 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.400352 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/564a48e7-438e-4374-9b43-92409e093ae2-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.400443 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/564a48e7-438e-4374-9b43-92409e093ae2-config-out\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.401540 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.401599 
4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.401742 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.402548 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.403280 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.408056 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/564a48e7-438e-4374-9b43-92409e093ae2-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.408744 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.409214 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.416945 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.417445 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/564a48e7-438e-4374-9b43-92409e093ae2-config-out\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.417558 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-web-config\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.417730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.417863 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.417855 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/564a48e7-438e-4374-9b43-92409e093ae2-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.418001 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.421521 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.424977 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rvnb\" (UniqueName: \"kubernetes.io/projected/564a48e7-438e-4374-9b43-92409e093ae2-kube-api-access-8rvnb\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.426778 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/564a48e7-438e-4374-9b43-92409e093ae2-config\") pod \"prometheus-k8s-0\" (UID: \"564a48e7-438e-4374-9b43-92409e093ae2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:26 crc kubenswrapper[4957]: I0218 14:38:26.514654 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:28 crc kubenswrapper[4957]: I0218 14:38:28.009816 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66744675f6-ctstc" podStartSLOduration=4.009794898 podStartE2EDuration="4.009794898s" podCreationTimestamp="2026-02-18 14:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:38:26.398182261 +0000 UTC m=+412.919047005" watchObservedRunningTime="2026-02-18 14:38:28.009794898 +0000 UTC m=+414.530659642" Feb 18 14:38:28 crc kubenswrapper[4957]: I0218 14:38:28.014685 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:38:28 crc kubenswrapper[4957]: W0218 14:38:28.362459 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod564a48e7_438e_4374_9b43_92409e093ae2.slice/crio-4eaa06a024365a11171b13138507906fa8178d21640c916dad7adc43f4e965fe WatchSource:0}: Error finding container 4eaa06a024365a11171b13138507906fa8178d21640c916dad7adc43f4e965fe: Status 404 returned error can't find the container with id 4eaa06a024365a11171b13138507906fa8178d21640c916dad7adc43f4e965fe Feb 18 14:38:28 crc kubenswrapper[4957]: I0218 14:38:28.387825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"4eaa06a024365a11171b13138507906fa8178d21640c916dad7adc43f4e965fe"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.403397 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"9f883b587f74ae5f753ee57d31cdf3182b2f1320b34ee653f299b3a601df7dc7"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.404022 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"6814c6959b222df0f46a44a7451f8bec0063b6502c4cabfdb975c742b4c948da"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.404034 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"d899d4e79e1cef267cc8f27cdae9e16eaad55fc3f13baffb6a132593d548090e"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.404043 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"b95caa45a6b110b9faa4feaa22fd2b2ed2ecf225692a05a2af4989028137110f"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.404860 4957 generic.go:334] "Generic (PLEG): container finished" podID="564a48e7-438e-4374-9b43-92409e093ae2" containerID="a5b8fc78e56ba43fe5467afc0470f23f74426237487eeb2351c88060d5861e86" exitCode=0 Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.405545 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerDied","Data":"a5b8fc78e56ba43fe5467afc0470f23f74426237487eeb2351c88060d5861e86"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.407599 
4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" event={"ID":"bdcbb72c-6e5e-4167-baf4-ca754b4122a0","Type":"ContainerStarted","Data":"4c75ce34f7912c1678d6e79c294eb2798fa93aa8c3f970e4c466fb3bab99e09b"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.412256 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" event={"ID":"4d4390cd-8a73-4bc0-8a08-8f018c308d17","Type":"ContainerStarted","Data":"9d8bf8f3f24649e89e9f291d7ecbe95de8783074f7f3f6bae79aedd1b78ee09c"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.412497 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.416716 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"47266fdb6ad714810a6ab1b5135e9df01fe64ee82f5870b188bdca18cb88e601"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.416754 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"a1b3497d23e1e837334cc66173cb5e1031061362afcb1a97d343d423f3771941"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.416769 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" event={"ID":"513d3d65-d89a-4418-b0c2-4eadd3ae5600","Type":"ContainerStarted","Data":"7758cadfbb444377dfa983ff92223661e7457d792bf9c6bf7cffb3891a83a4d8"} Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.416870 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.417469 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.466945 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podStartSLOduration=2.967629371 podStartE2EDuration="8.466924615s" podCreationTimestamp="2026-02-18 14:38:21 +0000 UTC" firstStartedPulling="2026-02-18 14:38:23.19062243 +0000 UTC m=+409.711487174" lastFinishedPulling="2026-02-18 14:38:28.689917684 +0000 UTC m=+415.210782418" observedRunningTime="2026-02-18 14:38:29.460627379 +0000 UTC m=+415.981492123" watchObservedRunningTime="2026-02-18 14:38:29.466924615 +0000 UTC m=+415.987789359" Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.479054 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podStartSLOduration=1.465011026 podStartE2EDuration="4.479038383s" podCreationTimestamp="2026-02-18 14:38:25 +0000 UTC" firstStartedPulling="2026-02-18 14:38:25.675889207 +0000 UTC m=+412.196753951" lastFinishedPulling="2026-02-18 14:38:28.689916564 +0000 UTC m=+415.210781308" observedRunningTime="2026-02-18 14:38:29.476270061 +0000 UTC m=+415.997134806" watchObservedRunningTime="2026-02-18 14:38:29.479038383 +0000 UTC m=+415.999903117" Feb 18 14:38:29 crc kubenswrapper[4957]: I0218 14:38:29.491434 4957 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" podStartSLOduration=2.037352275 podStartE2EDuration="4.491395449s" podCreationTimestamp="2026-02-18 14:38:25 +0000 UTC" firstStartedPulling="2026-02-18 14:38:26.340768603 +0000 UTC m=+412.861633357" lastFinishedPulling="2026-02-18 14:38:28.794811787 +0000 UTC m=+415.315676531" observedRunningTime="2026-02-18 14:38:29.489453811 +0000 UTC m=+416.010318545" watchObservedRunningTime="2026-02-18 14:38:29.491395449 +0000 UTC m=+416.012260193" Feb 18 14:38:30 crc kubenswrapper[4957]: I0218 14:38:30.426097 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"5263fa94e1202c0dfc5805e3388a6881e14eb034209ad8b7d09dd5bf42738892"} Feb 18 14:38:30 crc kubenswrapper[4957]: I0218 14:38:30.426931 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fc66d3f7-be43-4189-94c5-59c46296305f","Type":"ContainerStarted","Data":"27a8f7c540db0107db612aec13c79b2588bbf5d5b65573bd5bfbce760e3caab2"} Feb 18 14:38:30 crc kubenswrapper[4957]: I0218 14:38:30.437708 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" Feb 18 14:38:30 crc kubenswrapper[4957]: I0218 14:38:30.462339 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.228246714 podStartE2EDuration="10.462300366s" podCreationTimestamp="2026-02-18 14:38:20 +0000 UTC" firstStartedPulling="2026-02-18 14:38:21.455823301 +0000 UTC m=+407.976688055" lastFinishedPulling="2026-02-18 14:38:28.689876963 +0000 UTC m=+415.210741707" observedRunningTime="2026-02-18 14:38:30.45398329 +0000 UTC m=+416.974848044" watchObservedRunningTime="2026-02-18 14:38:30.462300366 +0000 UTC m=+416.983165110" Feb 18 14:38:33 crc kubenswrapper[4957]: I0218 14:38:33.449124 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"3833f070a84dfcccdf1d2a7e54c3846054a51f765cae8d56c292ed7c6f5d3192"} Feb 18 14:38:33 crc kubenswrapper[4957]: I0218 14:38:33.449782 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"61706a1295e2e0b03949fe2d110236109f860f02944f9253a020475ec6744bae"} Feb 18 14:38:33 crc kubenswrapper[4957]: I0218 14:38:33.449800 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"b4c871c14a51725ebf3796f8746cf035460ac49f73e5c78bf4515d75722dfd0b"} Feb 18 14:38:33 crc kubenswrapper[4957]: I0218 14:38:33.449813 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"69a8bbddce444251ad0f3d2863e32d045fb44ebada43a5a14f8beaef3f0f0e22"} Feb 18 14:38:33 crc kubenswrapper[4957]: I0218 14:38:33.449824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb"} Feb 18 
14:38:34 crc kubenswrapper[4957]: I0218 14:38:34.934967 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:34 crc kubenswrapper[4957]: I0218 14:38:34.935498 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:34 crc kubenswrapper[4957]: I0218 14:38:34.941444 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:35 crc kubenswrapper[4957]: I0218 14:38:35.466258 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"49ba6a3a306b04197910d0b99edcb491960b42d0eb79f5b774e185e5bf9d42df"} Feb 18 14:38:35 crc kubenswrapper[4957]: I0218 14:38:35.470164 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:38:35 crc kubenswrapper[4957]: I0218 14:38:35.519749 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hvb66"] Feb 18 14:38:36 crc kubenswrapper[4957]: I0218 14:38:36.515097 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:38:36 crc kubenswrapper[4957]: I0218 14:38:36.530657 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=7.572908477 podStartE2EDuration="10.530626789s" podCreationTimestamp="2026-02-18 14:38:26 +0000 UTC" firstStartedPulling="2026-02-18 14:38:29.406597641 +0000 UTC m=+415.927462385" lastFinishedPulling="2026-02-18 14:38:32.364315943 +0000 UTC m=+418.885180697" observedRunningTime="2026-02-18 14:38:36.519677536 +0000 UTC m=+423.040542290" watchObservedRunningTime="2026-02-18 14:38:36.530626789 +0000 UTC m=+423.051491543" Feb 18 14:38:45 crc kubenswrapper[4957]: I0218 14:38:45.422230 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:38:45 crc kubenswrapper[4957]: I0218 14:38:45.422796 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:39:00 crc kubenswrapper[4957]: I0218 14:39:00.585969 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hvb66" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" containerName="console" containerID="cri-o://ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555" gracePeriod=15 Feb 18 14:39:00 crc kubenswrapper[4957]: I0218 14:39:00.944987 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hvb66_3da7ba0a-4ddc-4bca-acd8-e598854eceec/console/0.log" Feb 18 14:39:00 crc kubenswrapper[4957]: I0218 14:39:00.945406 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.096758 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-config\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.096856 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-serving-cert\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.096923 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-oauth-serving-cert\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.096957 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-service-ca\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.096996 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-trusted-ca-bundle\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.097024 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrhq\" (UniqueName: \"kubernetes.io/projected/3da7ba0a-4ddc-4bca-acd8-e598854eceec-kube-api-access-tgrhq\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.097059 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-oauth-config\") pod \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\" (UID: \"3da7ba0a-4ddc-4bca-acd8-e598854eceec\") " Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.098258 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-service-ca" (OuterVolumeSpecName: "service-ca") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.098258 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.098276 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-config" (OuterVolumeSpecName: "console-config") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.098329 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.099035 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.099066 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.099079 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.099092 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da7ba0a-4ddc-4bca-acd8-e598854eceec-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.104660 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.108318 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da7ba0a-4ddc-4bca-acd8-e598854eceec-kube-api-access-tgrhq" (OuterVolumeSpecName: "kube-api-access-tgrhq") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "kube-api-access-tgrhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.118341 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3da7ba0a-4ddc-4bca-acd8-e598854eceec" (UID: "3da7ba0a-4ddc-4bca-acd8-e598854eceec"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.201019 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgrhq\" (UniqueName: \"kubernetes.io/projected/3da7ba0a-4ddc-4bca-acd8-e598854eceec-kube-api-access-tgrhq\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.201090 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.201104 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3da7ba0a-4ddc-4bca-acd8-e598854eceec-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.645656 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hvb66_3da7ba0a-4ddc-4bca-acd8-e598854eceec/console/0.log" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.645712 4957 generic.go:334] "Generic (PLEG): container finished" podID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" containerID="ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555" exitCode=2 Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.645756 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hvb66" event={"ID":"3da7ba0a-4ddc-4bca-acd8-e598854eceec","Type":"ContainerDied","Data":"ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555"} Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.645797 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hvb66" event={"ID":"3da7ba0a-4ddc-4bca-acd8-e598854eceec","Type":"ContainerDied","Data":"2589baf18de33489f2254deddca36101ec0dd9c817ee0858fb1885543755b7fd"} Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.645796 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hvb66" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.645843 4957 scope.go:117] "RemoveContainer" containerID="ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.684227 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hvb66"] Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.684528 4957 scope.go:117] "RemoveContainer" containerID="ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555" Feb 18 14:39:01 crc kubenswrapper[4957]: E0218 14:39:01.685644 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555\": container with ID starting with ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555 not found: ID does not exist" containerID="ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.685694 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555"} err="failed to get container status \"ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555\": rpc error: code = NotFound desc = could not find container \"ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555\": container with ID starting with ba042a90b28b4f74a10a27ab7f2f733d6eaf97aba98e22701ba26f4b80ff6555 not found: ID does not exist" Feb 18 14:39:01 crc kubenswrapper[4957]: I0218 14:39:01.688550 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hvb66"] Feb 18 14:39:02 crc kubenswrapper[4957]: I0218 14:39:02.221395 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" path="/var/lib/kubelet/pods/3da7ba0a-4ddc-4bca-acd8-e598854eceec/volumes" Feb 18 14:39:05 crc kubenswrapper[4957]: I0218 14:39:05.428354 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:39:05 crc kubenswrapper[4957]: I0218 14:39:05.433905 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 14:39:26 crc kubenswrapper[4957]: I0218 14:39:26.515291 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:39:26 crc kubenswrapper[4957]: I0218 14:39:26.553921 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:39:26 crc kubenswrapper[4957]: I0218 14:39:26.839266 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.669471 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79f7c68f86-ldbhx"] Feb 18 14:39:49 crc kubenswrapper[4957]: E0218 14:39:49.671643 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" containerName="console" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.671733 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" 
containerName="console" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.671911 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da7ba0a-4ddc-4bca-acd8-e598854eceec" containerName="console" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.672470 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.689791 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79f7c68f86-ldbhx"] Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768358 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-serving-cert\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768436 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-oauth-serving-cert\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-trusted-ca-bundle\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-service-ca\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768540 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-console-config\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-oauth-config\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.768620 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5clh\" (UniqueName: \"kubernetes.io/projected/55b19801-5e2c-47d1-b460-de7f465941b4-kube-api-access-l5clh\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.869885 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5clh\" (UniqueName: \"kubernetes.io/projected/55b19801-5e2c-47d1-b460-de7f465941b4-kube-api-access-l5clh\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.869977 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-serving-cert\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.870014 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-oauth-serving-cert\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.870036 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-trusted-ca-bundle\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.870079 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-service-ca\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.870110 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-console-config\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.870134 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-oauth-config\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.871499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-oauth-serving-cert\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.871533 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-console-config\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.871577 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-service-ca\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.871806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-trusted-ca-bundle\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.878566 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-serving-cert\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.879126 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-oauth-config\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:49 crc kubenswrapper[4957]: I0218 14:39:49.889324 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5clh\" (UniqueName: \"kubernetes.io/projected/55b19801-5e2c-47d1-b460-de7f465941b4-kube-api-access-l5clh\") pod \"console-79f7c68f86-ldbhx\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:50 crc kubenswrapper[4957]: I0218 14:39:50.000638 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:39:50 crc kubenswrapper[4957]: I0218 14:39:50.247376 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79f7c68f86-ldbhx"] Feb 18 14:39:50 crc kubenswrapper[4957]: I0218 14:39:50.967704 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-ldbhx" event={"ID":"55b19801-5e2c-47d1-b460-de7f465941b4","Type":"ContainerStarted","Data":"8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf"} Feb 18 14:39:50 crc kubenswrapper[4957]: I0218 14:39:50.968072 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-ldbhx" event={"ID":"55b19801-5e2c-47d1-b460-de7f465941b4","Type":"ContainerStarted","Data":"bee339cd435cbca41fc1f74962e8debd0cc8856ac3ac1fffd34d02ef5681aad2"} Feb 18 14:39:50 crc kubenswrapper[4957]: I0218 14:39:50.996673 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79f7c68f86-ldbhx" podStartSLOduration=1.996654 podStartE2EDuration="1.996654s" podCreationTimestamp="2026-02-18 14:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:39:50.995252789 +0000 UTC m=+497.516117543" watchObservedRunningTime="2026-02-18 14:39:50.996654 +0000 UTC m=+497.517518744" Feb 18 14:40:00 crc kubenswrapper[4957]: I0218 14:40:00.000727 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:40:00 crc kubenswrapper[4957]: I0218 14:40:00.001283 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:40:00 crc kubenswrapper[4957]: I0218 14:40:00.005651 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:40:00 crc kubenswrapper[4957]: I0218 14:40:00.469569 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:40:00 crc kubenswrapper[4957]: I0218 14:40:00.529000 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66744675f6-ctstc"] Feb 18 14:40:07 crc kubenswrapper[4957]: I0218 14:40:07.279802 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:40:07 crc kubenswrapper[4957]: I0218 14:40:07.280355 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:40:25 crc kubenswrapper[4957]: I0218 14:40:25.615541 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-66744675f6-ctstc" podUID="6e57bd79-bd26-4c30-ae82-8cf6215bad62" containerName="console" containerID="cri-o://ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5" gracePeriod=15 Feb 18 14:40:25 crc kubenswrapper[4957]: I0218 14:40:25.981067 4957 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-console_console-66744675f6-ctstc_6e57bd79-bd26-4c30-ae82-8cf6215bad62/console/0.log" Feb 18 14:40:25 crc kubenswrapper[4957]: I0218 14:40:25.981446 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66744675f6-ctstc" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032732 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-serving-cert\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032825 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-trusted-ca-bundle\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032870 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-oauth-serving-cert\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032896 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-oauth-config\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032914 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-config\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032943 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcgw\" (UniqueName: \"kubernetes.io/projected/6e57bd79-bd26-4c30-ae82-8cf6215bad62-kube-api-access-frcgw\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.032967 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-service-ca\") pod \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\" (UID: \"6e57bd79-bd26-4c30-ae82-8cf6215bad62\") " Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.033828 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.033977 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-config" (OuterVolumeSpecName: "console-config") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.037636 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.038227 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-service-ca" (OuterVolumeSpecName: "service-ca") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.038333 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e57bd79-bd26-4c30-ae82-8cf6215bad62-kube-api-access-frcgw" (OuterVolumeSpecName: "kube-api-access-frcgw") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "kube-api-access-frcgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.038359 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.038476 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6e57bd79-bd26-4c30-ae82-8cf6215bad62" (UID: "6e57bd79-bd26-4c30-ae82-8cf6215bad62"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134161 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134514 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134572 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134631 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frcgw\" (UniqueName: \"kubernetes.io/projected/6e57bd79-bd26-4c30-ae82-8cf6215bad62-kube-api-access-frcgw\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134708 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134763 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e57bd79-bd26-4c30-ae82-8cf6215bad62-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.134812 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e57bd79-bd26-4c30-ae82-8cf6215bad62-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.662047 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66744675f6-ctstc_6e57bd79-bd26-4c30-ae82-8cf6215bad62/console/0.log" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.662113 4957 generic.go:334] "Generic (PLEG): container finished" podID="6e57bd79-bd26-4c30-ae82-8cf6215bad62" containerID="ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5" exitCode=2 Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.662161 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66744675f6-ctstc" event={"ID":"6e57bd79-bd26-4c30-ae82-8cf6215bad62","Type":"ContainerDied","Data":"ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5"} Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.662208 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66744675f6-ctstc" event={"ID":"6e57bd79-bd26-4c30-ae82-8cf6215bad62","Type":"ContainerDied","Data":"9d5e3b08e253955b9c1542975a73a24b759957e0c4bce6c28db907bec9c1ae38"} Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.662228 4957 scope.go:117] "RemoveContainer" containerID="ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5" Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.662228 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66744675f6-ctstc"
Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.684816 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66744675f6-ctstc"]
Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.686996 4957 scope.go:117] "RemoveContainer" containerID="ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5"
Feb 18 14:40:26 crc kubenswrapper[4957]: E0218 14:40:26.687705 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5\": container with ID starting with ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5 not found: ID does not exist" containerID="ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5"
Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.687745 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5"} err="failed to get container status \"ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5\": rpc error: code = NotFound desc = could not find container \"ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5\": container with ID starting with ddb97e767db08834fa3b55997e5794616787eeae405d356b7fe1b40ae91f7cb5 not found: ID does not exist"
Feb 18 14:40:26 crc kubenswrapper[4957]: I0218 14:40:26.689931 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66744675f6-ctstc"]
Feb 18 14:40:28 crc kubenswrapper[4957]: I0218 14:40:28.226573 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e57bd79-bd26-4c30-ae82-8cf6215bad62" path="/var/lib/kubelet/pods/6e57bd79-bd26-4c30-ae82-8cf6215bad62/volumes"
Feb 18 14:40:37 crc kubenswrapper[4957]: I0218 14:40:37.279612 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:40:37 crc kubenswrapper[4957]: I0218 14:40:37.280681 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:41:07 crc kubenswrapper[4957]: I0218 14:41:07.278898 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:41:07 crc kubenswrapper[4957]: I0218 14:41:07.279766 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:41:07 crc kubenswrapper[4957]: I0218 14:41:07.279825 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 14:41:07 crc kubenswrapper[4957]: I0218 14:41:07.280586 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"135878b003cd530333b1214c0015b7d7b2523f3ba8a66d6b435c6895505efe92"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 14:41:07 crc kubenswrapper[4957]: I0218 14:41:07.280654 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://135878b003cd530333b1214c0015b7d7b2523f3ba8a66d6b435c6895505efe92" gracePeriod=600
Feb 18 14:41:08 crc kubenswrapper[4957]: I0218 14:41:08.054804 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="135878b003cd530333b1214c0015b7d7b2523f3ba8a66d6b435c6895505efe92" exitCode=0
Feb 18 14:41:08 crc kubenswrapper[4957]: I0218 14:41:08.054901 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"135878b003cd530333b1214c0015b7d7b2523f3ba8a66d6b435c6895505efe92"}
Feb 18 14:41:08 crc kubenswrapper[4957]: I0218 14:41:08.055675 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"7941b927f3e2a93f4dd37a4d3392ef224a62719b3a5e2aadf6edc8ae2eb35a9d"}
Feb 18 14:41:08 crc kubenswrapper[4957]: I0218 14:41:08.055702 4957 scope.go:117] "RemoveContainer" containerID="039cd260c9485199567f1825b67f6c50651e5a330eb2444ec1819bfdfe45f6a1"
Feb 18 14:41:34 crc kubenswrapper[4957]: I0218 14:41:34.482718 4957 scope.go:117] "RemoveContainer" containerID="7e7084fbda1c7228c1e42f7cf4b3376a5c90faaacb1043d891fa810d1cfeac4b"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.407482 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"]
Feb 18 14:42:41 crc kubenswrapper[4957]: E0218 14:42:41.408480 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57bd79-bd26-4c30-ae82-8cf6215bad62" containerName="console"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.408497 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57bd79-bd26-4c30-ae82-8cf6215bad62" containerName="console"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.408649 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e57bd79-bd26-4c30-ae82-8cf6215bad62" containerName="console"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.409759 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.412569 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.418993 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"]
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.587480 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f56fj\" (UniqueName: \"kubernetes.io/projected/b83c916a-2d04-4618-9254-4f4660a4b976-kube-api-access-f56fj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.587599 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.587671 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.688766 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f56fj\" (UniqueName: \"kubernetes.io/projected/b83c916a-2d04-4618-9254-4f4660a4b976-kube-api-access-f56fj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.688846 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.688877 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.691100 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.691264 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.711707 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f56fj\" (UniqueName: \"kubernetes.io/projected/b83c916a-2d04-4618-9254-4f4660a4b976-kube-api-access-f56fj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.727668 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:41 crc kubenswrapper[4957]: I0218 14:42:41.987017 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"]
Feb 18 14:42:42 crc kubenswrapper[4957]: I0218 14:42:42.472996 4957 generic.go:334] "Generic (PLEG): container finished" podID="b83c916a-2d04-4618-9254-4f4660a4b976" containerID="35b29627e65e6661071ae85c47dce106684ef4b4f6841a039195fafb11c2fb3a" exitCode=0
Feb 18 14:42:42 crc kubenswrapper[4957]: I0218 14:42:42.473078 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th" event={"ID":"b83c916a-2d04-4618-9254-4f4660a4b976","Type":"ContainerDied","Data":"35b29627e65e6661071ae85c47dce106684ef4b4f6841a039195fafb11c2fb3a"}
Feb 18 14:42:42 crc kubenswrapper[4957]: I0218 14:42:42.473543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th" event={"ID":"b83c916a-2d04-4618-9254-4f4660a4b976","Type":"ContainerStarted","Data":"7d921e74ee5526bf939ddcf5543d3a0ba1d42b50e0b61440f772a9ea2c30e9f6"}
Feb 18 14:42:42 crc kubenswrapper[4957]: I0218 14:42:42.476493 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 14:42:44 crc kubenswrapper[4957]: I0218 14:42:44.492007 4957 generic.go:334] "Generic (PLEG): container finished" podID="b83c916a-2d04-4618-9254-4f4660a4b976" containerID="56e1eb5d9e1ad28946c66a0560c3ee251a3482646a466f47f716859399bd942f" exitCode=0
Feb 18 14:42:44 crc kubenswrapper[4957]: I0218 14:42:44.492149 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th" event={"ID":"b83c916a-2d04-4618-9254-4f4660a4b976","Type":"ContainerDied","Data":"56e1eb5d9e1ad28946c66a0560c3ee251a3482646a466f47f716859399bd942f"}
Feb 18 14:42:45 crc kubenswrapper[4957]: I0218 14:42:45.503843 4957 generic.go:334] "Generic (PLEG): container finished" podID="b83c916a-2d04-4618-9254-4f4660a4b976" containerID="a5f1d5cdc506bd0245776a0413cda8f77910e5fa3123323703dbfd10eb4acc5c" exitCode=0
Feb 18 14:42:45 crc kubenswrapper[4957]: I0218 14:42:45.503932 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th" event={"ID":"b83c916a-2d04-4618-9254-4f4660a4b976","Type":"ContainerDied","Data":"a5f1d5cdc506bd0245776a0413cda8f77910e5fa3123323703dbfd10eb4acc5c"}
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.813761 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.982738 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-util\") pod \"b83c916a-2d04-4618-9254-4f4660a4b976\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") "
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.983011 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-bundle\") pod \"b83c916a-2d04-4618-9254-4f4660a4b976\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") "
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.983123 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f56fj\" (UniqueName: \"kubernetes.io/projected/b83c916a-2d04-4618-9254-4f4660a4b976-kube-api-access-f56fj\") pod \"b83c916a-2d04-4618-9254-4f4660a4b976\" (UID: \"b83c916a-2d04-4618-9254-4f4660a4b976\") "
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.985883 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-bundle" (OuterVolumeSpecName: "bundle") pod "b83c916a-2d04-4618-9254-4f4660a4b976" (UID: "b83c916a-2d04-4618-9254-4f4660a4b976"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.990318 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83c916a-2d04-4618-9254-4f4660a4b976-kube-api-access-f56fj" (OuterVolumeSpecName: "kube-api-access-f56fj") pod "b83c916a-2d04-4618-9254-4f4660a4b976" (UID: "b83c916a-2d04-4618-9254-4f4660a4b976"). InnerVolumeSpecName "kube-api-access-f56fj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:42:46 crc kubenswrapper[4957]: I0218 14:42:46.999097 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-util" (OuterVolumeSpecName: "util") pod "b83c916a-2d04-4618-9254-4f4660a4b976" (UID: "b83c916a-2d04-4618-9254-4f4660a4b976"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:42:47 crc kubenswrapper[4957]: I0218 14:42:47.084896 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f56fj\" (UniqueName: \"kubernetes.io/projected/b83c916a-2d04-4618-9254-4f4660a4b976-kube-api-access-f56fj\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:47 crc kubenswrapper[4957]: I0218 14:42:47.084931 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-util\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:47 crc kubenswrapper[4957]: I0218 14:42:47.084947 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b83c916a-2d04-4618-9254-4f4660a4b976-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:47 crc kubenswrapper[4957]: I0218 14:42:47.524978 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th" event={"ID":"b83c916a-2d04-4618-9254-4f4660a4b976","Type":"ContainerDied","Data":"7d921e74ee5526bf939ddcf5543d3a0ba1d42b50e0b61440f772a9ea2c30e9f6"}
Feb 18 14:42:47 crc kubenswrapper[4957]: I0218 14:42:47.525060 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d921e74ee5526bf939ddcf5543d3a0ba1d42b50e0b61440f772a9ea2c30e9f6"
Feb 18 14:42:47 crc kubenswrapper[4957]: I0218 14:42:47.525020 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th"
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.657975 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7lp9"]
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659318 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-controller" containerID="cri-o://c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659401 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="nbdb" containerID="cri-o://6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659534 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="northd" containerID="cri-o://0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659609 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659662 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-node" containerID="cri-o://d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659718 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-acl-logging" containerID="cri-o://9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.659887 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="sbdb" containerID="cri-o://07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f" gracePeriod=30
Feb 18 14:42:52 crc kubenswrapper[4957]: I0218 14:42:52.718279 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller" containerID="cri-o://6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404" gracePeriod=30
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.569434 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/4.log"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.570720 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/3.log"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.573385 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovn-acl-logging/0.log"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.573907 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovn-controller/0.log"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574469 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404" exitCode=2
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574498 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f" exitCode=0
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574507 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724" exitCode=0
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574516 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0" exitCode=0
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574527 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49" exitCode=143
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574538 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84" exitCode=143
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574563 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574619 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574631 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574672 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.574673 4957 scope.go:117] "RemoveContainer" containerID="bc6ab3573641e68bec6f227fa30ead1a586c1cf3abdf781b93edf3396b81f192"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.577335 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/2.log"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.577878 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/1.log"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.577927 4957 generic.go:334] "Generic (PLEG): container finished" podID="e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb" containerID="5404d3757559a526399e1c8ad0cad0fc231a9f59ad1d9be14407c358b01a6bb2" exitCode=2
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.577954 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerDied","Data":"5404d3757559a526399e1c8ad0cad0fc231a9f59ad1d9be14407c358b01a6bb2"}
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.578488 4957 scope.go:117] "RemoveContainer" containerID="5404d3757559a526399e1c8ad0cad0fc231a9f59ad1d9be14407c358b01a6bb2"
Feb 18 14:42:53 crc kubenswrapper[4957]: E0218 14:42:53.578698 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sk96m_openshift-multus(e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb)\"" pod="openshift-multus/multus-sk96m" podUID="e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb"
Feb 18 14:42:53 crc kubenswrapper[4957]: I0218 14:42:53.602915 4957 scope.go:117] "RemoveContainer" containerID="3b9a1d713ed8c708dc17974a35a4e22543cfeee4911baa33638b24cb02f01f1e"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.398636 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/4.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.404043 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovn-acl-logging/0.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.404864 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovn-controller/0.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.405626 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493352 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vc7fk"]
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493659 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493678 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493690 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="sbdb"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493698 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="sbdb"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493712 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="northd"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493721 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="northd"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493736 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493743 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493752 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-node"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493758 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-node"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493767 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493773 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493780 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kubecfg-setup"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493787 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kubecfg-setup"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493797 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="extract"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493804 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="extract"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493812 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-acl-logging"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493819 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-acl-logging"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493833 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493841 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493852 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493860 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493875 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="pull"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493883 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="pull"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493903 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="nbdb"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493910 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="nbdb"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493921 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493928 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.493941 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="util"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.493948 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="util"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494069 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="northd"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494082 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83c916a-2d04-4618-9254-4f4660a4b976" containerName="extract"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494093 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-node"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494106 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494114 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494123 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="sbdb"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494131 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovn-acl-logging"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494142 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="nbdb"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494149 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494162 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494172 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494181 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494192 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.494310 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.494319 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerName="ovnkube-controller"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.496765 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536539 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-env-overrides\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536595 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ngss\" (UniqueName: \"kubernetes.io/projected/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-kube-api-access-6ngss\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536642 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-slash\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536678 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-script-lib\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536712 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-netns\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-var-lib-openvswitch\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536769 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-openvswitch\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536796 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-netd\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536853 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-config\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536899 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-node-log\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536931 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-bin\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536963 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovn-node-metrics-cert\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.536992 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-ovn-kubernetes\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537022 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-systemd\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537145 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537215 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537223 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537246 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-slash" (OuterVolumeSpecName: "host-slash") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537468 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-systemd-units\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537544 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537564 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537581 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537606 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537633 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.537685 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.538011 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-node-log" (OuterVolumeSpecName: "node-log") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.538038 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.538057 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.538184 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.538034 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-etc-openvswitch\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.539176 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-kubelet\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.539303 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-log-socket\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.539412 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-ovn\") pod \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\" (UID: \"c1ab5e7d-28c9-416b-9e12-1209987d8a2c\") "
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.539756 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/910cf14f-cf93-4db8-8681-12130ae2ae27-ovn-node-metrics-cert\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.539875 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-cni-bin\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.539964 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-ovnkube-config\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.540072 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-env-overrides\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.540108 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.540128 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.540144 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-log-socket" (OuterVolumeSpecName: "log-socket") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.540202 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-run-netns\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.540544 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-ovn\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.541157 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wps4p\" (UniqueName: \"kubernetes.io/projected/910cf14f-cf93-4db8-8681-12130ae2ae27-kube-api-access-wps4p\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.541283 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-log-socket\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544665 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544766 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-etc-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-systemd-units\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544839 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-systemd\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544895 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-kubelet\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544920 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-slash\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.544958 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-var-lib-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545003 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-ovnkube-script-lib\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-node-log\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-cni-netd\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545126 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545173 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-run-ovn-kubernetes\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545287 4957 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-node-log\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545303 4957 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545316 4957 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545327 4957 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545337 4957 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545346 4957 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545356 4957 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545370 4957 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-log-socket\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545383 4957 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545394 4957 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545405 4957 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-slash\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545415 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545465 4957 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545475 4957 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545485 4957 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545497 4957 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.545510 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.555773 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.564667 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-kube-api-access-6ngss" (OuterVolumeSpecName: "kube-api-access-6ngss") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "kube-api-access-6ngss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.582177 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c1ab5e7d-28c9-416b-9e12-1209987d8a2c" (UID: "c1ab5e7d-28c9-416b-9e12-1209987d8a2c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.589588 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/2.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.591731 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovnkube-controller/4.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.593880 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovn-acl-logging/0.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.594558 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t7lp9_c1ab5e7d-28c9-416b-9e12-1209987d8a2c/ovn-controller/0.log"
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.594916 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702" exitCode=0
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.594942 4957 generic.go:334] "Generic (PLEG): container finished" podID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" containerID="d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53" exitCode=0
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.594971 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702"}
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.595002 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53"}
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.595016 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" event={"ID":"c1ab5e7d-28c9-416b-9e12-1209987d8a2c","Type":"ContainerDied","Data":"024ebb6ef55970f726161a8a19661c20d8e7565a53612ff6d85907f667f95251"}
Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.595021 4957 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7lp9" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.595034 4957 scope.go:117] "RemoveContainer" containerID="6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.620711 4957 scope.go:117] "RemoveContainer" containerID="07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.636778 4957 scope.go:117] "RemoveContainer" containerID="6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.641579 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7lp9"] Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646560 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-cni-netd\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646609 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646648 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-run-ovn-kubernetes\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646677 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/910cf14f-cf93-4db8-8681-12130ae2ae27-ovn-node-metrics-cert\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646697 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-cni-bin\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646715 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-ovnkube-config\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646739 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-env-overrides\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 
14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646741 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-cni-netd\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646762 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-run-netns\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646806 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-ovn\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646830 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wps4p\" (UniqueName: \"kubernetes.io/projected/910cf14f-cf93-4db8-8681-12130ae2ae27-kube-api-access-wps4p\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646863 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-log-socket\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646918 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-etc-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646938 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-systemd-units\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646957 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-systemd\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646962 4957 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647028 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-kubelet\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.646983 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-kubelet\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647094 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-slash\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-var-lib-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647240 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-ovnkube-script-lib\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-node-log\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647510 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647541 4957 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647549 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-cni-bin\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 
14:42:54.647589 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-node-log\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647561 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ngss\" (UniqueName: \"kubernetes.io/projected/c1ab5e7d-28c9-416b-9e12-1209987d8a2c-kube-api-access-6ngss\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647620 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-var-lib-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647694 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-log-socket\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647735 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-ovn\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647762 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-run-netns\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-ovnkube-config\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647791 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-run-ovn-kubernetes\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647801 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-env-overrides\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647818 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-etc-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: 
\"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647821 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-systemd-units\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647837 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-openvswitch\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647874 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-run-systemd\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.647890 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/910cf14f-cf93-4db8-8681-12130ae2ae27-host-slash\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.648452 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/910cf14f-cf93-4db8-8681-12130ae2ae27-ovnkube-script-lib\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.650647 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/910cf14f-cf93-4db8-8681-12130ae2ae27-ovn-node-metrics-cert\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.650936 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7lp9"] Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.674865 4957 scope.go:117] "RemoveContainer" containerID="0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.679778 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wps4p\" (UniqueName: \"kubernetes.io/projected/910cf14f-cf93-4db8-8681-12130ae2ae27-kube-api-access-wps4p\") pod \"ovnkube-node-vc7fk\" (UID: \"910cf14f-cf93-4db8-8681-12130ae2ae27\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.696350 4957 scope.go:117] "RemoveContainer" containerID="eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.718894 4957 scope.go:117] "RemoveContainer" containerID="d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.741835 4957 scope.go:117] "RemoveContainer" 
containerID="9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.759811 4957 scope.go:117] "RemoveContainer" containerID="c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.779698 4957 scope.go:117] "RemoveContainer" containerID="854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.810188 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.814881 4957 scope.go:117] "RemoveContainer" containerID="6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.815525 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404\": container with ID starting with 6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404 not found: ID does not exist" containerID="6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.815564 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404"} err="failed to get container status \"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404\": rpc error: code = NotFound desc = could not find container \"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404\": container with ID starting with 6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.815596 4957 scope.go:117] "RemoveContainer" containerID="07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.815948 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\": container with ID starting with 07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f not found: ID does not exist" containerID="07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.816012 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f"} err="failed to get container status \"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\": rpc error: code = NotFound desc = could not find container \"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\": container with ID starting with 07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.816052 4957 scope.go:117] "RemoveContainer" containerID="6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.816560 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\": container with 
ID starting with 6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724 not found: ID does not exist" containerID="6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.816587 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724"} err="failed to get container status \"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\": rpc error: code = NotFound desc = could not find container \"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\": container with ID starting with 6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.816613 4957 scope.go:117] "RemoveContainer" containerID="0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.816931 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\": container with ID starting with 0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0 not found: ID does not exist" containerID="0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.816959 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0"} err="failed to get container status \"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\": rpc error: code = NotFound desc = could not find container \"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\": container with ID starting with 0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.816975 4957 scope.go:117] "RemoveContainer" containerID="eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.817234 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\": container with ID starting with eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702 not found: ID does not exist" containerID="eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.817257 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702"} err="failed to get container status \"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\": rpc error: code = NotFound desc = could not find container \"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\": container with ID starting with eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.817272 4957 scope.go:117] "RemoveContainer" containerID="d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.817714 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\": container with ID starting with d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53 not found: ID does not exist" containerID="d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.817744 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53"} err="failed to get container status \"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\": rpc error: code = NotFound desc = could not find container \"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\": container with ID starting with d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.817757 4957 scope.go:117] "RemoveContainer" containerID="9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.818020 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\": container with ID starting with 9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49 not found: ID does not exist" containerID="9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.818113 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49"} err="failed to get container status \"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\": rpc error: code = NotFound desc = could not find container \"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\": container with ID starting with 9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.818141 4957 scope.go:117] "RemoveContainer" containerID="c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.818532 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\": container with ID starting with c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84 not found: ID does not exist" containerID="c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.818581 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84"} err="failed to get container status \"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\": rpc error: code = NotFound desc = could not find container \"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\": container with ID starting with c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.818629 4957 scope.go:117] "RemoveContainer" 
containerID="854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79" Feb 18 14:42:54 crc kubenswrapper[4957]: E0218 14:42:54.819007 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\": container with ID starting with 854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79 not found: ID does not exist" containerID="854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.819036 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79"} err="failed to get container status \"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\": rpc error: code = NotFound desc = could not find container \"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\": container with ID starting with 854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.819054 4957 scope.go:117] "RemoveContainer" containerID="6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.819371 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404"} err="failed to get container status \"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404\": rpc error: code = NotFound desc = could not find container \"6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404\": container with ID starting with 6abc231989624dfd52760fc08c1c15a4ca94c6c14ac91bfb7bc64bf5b3fc6404 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.819402 4957 scope.go:117] "RemoveContainer" containerID="07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.819795 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f"} err="failed to get container status \"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\": rpc error: code = NotFound desc = could not find container \"07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f\": container with ID starting with 07549d11e88948ba7929804722b7d49a2fefb64d0c324c2032529c13df70a23f not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.819820 4957 scope.go:117] "RemoveContainer" containerID="6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.820063 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724"} err="failed to get container status \"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\": rpc error: code = NotFound desc = could not find container \"6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724\": container with ID starting with 6c773ab7da87d534f8ca400b73f89b473291b15b65118abfa48da7e8af126724 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.820092 4957 scope.go:117] "RemoveContainer" 
containerID="0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.820361 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0"} err="failed to get container status \"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\": rpc error: code = NotFound desc = could not find container \"0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0\": container with ID starting with 0fb30623c479a6d604d07d8779f3b5661ba8a2c98fa38f145443c7f80fbfddb0 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.820384 4957 scope.go:117] "RemoveContainer" containerID="eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.820849 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702"} err="failed to get container status \"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\": rpc error: code = NotFound desc = could not find container \"eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702\": container with ID starting with eb7e286dbf7b49a0ee8019eb660a4b977399c163bb094d77f830b3747399d702 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.820868 4957 scope.go:117] "RemoveContainer" containerID="d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.821191 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53"} err="failed to get container status \"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\": rpc error: code = NotFound desc = could not find container \"d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53\": container with ID starting with d3d55cd020086b018dcf2183c7307ce4f66a18d4dfd5c7e387419f5aaeb08e53 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.821277 4957 scope.go:117] "RemoveContainer" containerID="9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.821550 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49"} err="failed to get container status \"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\": rpc error: code = NotFound desc = could not find container \"9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49\": container with ID starting with 9575c57251809ec79d9e9a63fc18fc0fe431582b023c0bfb9d71b3ea5c369e49 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.821646 4957 scope.go:117] "RemoveContainer" containerID="c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.822405 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84"} err="failed to get container status \"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\": rpc error: code = NotFound desc = could not find 
container \"c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84\": container with ID starting with c7fc50065803f879ae25e1ab406e1b74e142cab30c38bedd0426616a2cc6cc84 not found: ID does not exist" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.822522 4957 scope.go:117] "RemoveContainer" containerID="854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79" Feb 18 14:42:54 crc kubenswrapper[4957]: I0218 14:42:54.822895 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79"} err="failed to get container status \"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\": rpc error: code = NotFound desc = could not find container \"854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79\": container with ID starting with 854d32930f1afa0a3f29b83159a4a473fd3c074f065cab0f110c4c0e62d0ab79 not found: ID does not exist" Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.604314 4957 generic.go:334] "Generic (PLEG): container finished" podID="910cf14f-cf93-4db8-8681-12130ae2ae27" containerID="652ac5c021ce4fe7ebbf84b18cf6d797292b78d393ef6022a1621b5574818deb" exitCode=0 Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.604429 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerDied","Data":"652ac5c021ce4fe7ebbf84b18cf6d797292b78d393ef6022a1621b5574818deb"} Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.604501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"e39ef3ec695e30f3dba08d02586c775f7e8de20d93b6ae470775ffc35f38fde1"} Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.935959 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb"] Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.938139 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.940888 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.941487 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.945554 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-dsr79" Feb 18 14:42:55 crc kubenswrapper[4957]: I0218 14:42:55.970852 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6pp\" (UniqueName: \"kubernetes.io/projected/36ce4ea6-b461-4e76-9db4-10bb9d864512-kube-api-access-xd6pp\") pod \"obo-prometheus-operator-68bc856cb9-h8fsb\" (UID: \"36ce4ea6-b461-4e76-9db4-10bb9d864512\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.073001 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6pp\" (UniqueName: \"kubernetes.io/projected/36ce4ea6-b461-4e76-9db4-10bb9d864512-kube-api-access-xd6pp\") pod \"obo-prometheus-operator-68bc856cb9-h8fsb\" (UID: \"36ce4ea6-b461-4e76-9db4-10bb9d864512\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.099382 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t"] Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.104866 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.109327 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gxmwv" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.111736 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.112729 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6pp\" (UniqueName: \"kubernetes.io/projected/36ce4ea6-b461-4e76-9db4-10bb9d864512-kube-api-access-xd6pp\") pod \"obo-prometheus-operator-68bc856cb9-h8fsb\" (UID: \"36ce4ea6-b461-4e76-9db4-10bb9d864512\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.117703 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch"] Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.118924 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.174813 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fd72d01-103a-4c09-8858-0fcd773f5d13-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t\" (UID: \"8fd72d01-103a-4c09-8858-0fcd773f5d13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.174899 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd72d01-103a-4c09-8858-0fcd773f5d13-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t\" (UID: \"8fd72d01-103a-4c09-8858-0fcd773f5d13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.174989 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d89f03d4-3521-43ea-85f4-631c25a3379b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch\" (UID: \"d89f03d4-3521-43ea-85f4-631c25a3379b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.175047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d89f03d4-3521-43ea-85f4-631c25a3379b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch\" (UID: \"d89f03d4-3521-43ea-85f4-631c25a3379b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.225194 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ab5e7d-28c9-416b-9e12-1209987d8a2c" path="/var/lib/kubelet/pods/c1ab5e7d-28c9-416b-9e12-1209987d8a2c/volumes" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.259572 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.277435 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fd72d01-103a-4c09-8858-0fcd773f5d13-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t\" (UID: \"8fd72d01-103a-4c09-8858-0fcd773f5d13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.277546 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd72d01-103a-4c09-8858-0fcd773f5d13-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t\" (UID: \"8fd72d01-103a-4c09-8858-0fcd773f5d13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.277601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d89f03d4-3521-43ea-85f4-631c25a3379b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch\" (UID: \"d89f03d4-3521-43ea-85f4-631c25a3379b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.277712 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d89f03d4-3521-43ea-85f4-631c25a3379b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch\" (UID: \"d89f03d4-3521-43ea-85f4-631c25a3379b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.290208 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8fd72d01-103a-4c09-8858-0fcd773f5d13-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t\" (UID: \"8fd72d01-103a-4c09-8858-0fcd773f5d13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.290662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d89f03d4-3521-43ea-85f4-631c25a3379b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch\" (UID: \"d89f03d4-3521-43ea-85f4-631c25a3379b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.293070 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d89f03d4-3521-43ea-85f4-631c25a3379b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch\" (UID: \"d89f03d4-3521-43ea-85f4-631c25a3379b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.293240 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mxz2r"] Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.294777 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.298756 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-tjbw7" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.299133 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.301079 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd72d01-103a-4c09-8858-0fcd773f5d13-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t\" (UID: \"8fd72d01-103a-4c09-8858-0fcd773f5d13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.324240 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(d785fabc76f40247983fffbce3d32d9aa47ac27f863787cd3ffd1070344cc807): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.324318 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(d785fabc76f40247983fffbce3d32d9aa47ac27f863787cd3ffd1070344cc807): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.324339 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(d785fabc76f40247983fffbce3d32d9aa47ac27f863787cd3ffd1070344cc807): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.324401 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators(36ce4ea6-b461-4e76-9db4-10bb9d864512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators(36ce4ea6-b461-4e76-9db4-10bb9d864512)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(d785fabc76f40247983fffbce3d32d9aa47ac27f863787cd3ffd1070344cc807): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" podUID="36ce4ea6-b461-4e76-9db4-10bb9d864512" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.383669 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4n6t\" (UniqueName: \"kubernetes.io/projected/955eb799-56c6-47e7-b5f7-eccac4b52134-kube-api-access-r4n6t\") pod \"observability-operator-59bdc8b94-mxz2r\" (UID: \"955eb799-56c6-47e7-b5f7-eccac4b52134\") " pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.383773 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/955eb799-56c6-47e7-b5f7-eccac4b52134-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mxz2r\" (UID: \"955eb799-56c6-47e7-b5f7-eccac4b52134\") " pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.485651 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/955eb799-56c6-47e7-b5f7-eccac4b52134-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mxz2r\" (UID: \"955eb799-56c6-47e7-b5f7-eccac4b52134\") " pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.485782 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4n6t\" (UniqueName: \"kubernetes.io/projected/955eb799-56c6-47e7-b5f7-eccac4b52134-kube-api-access-r4n6t\") pod \"observability-operator-59bdc8b94-mxz2r\" (UID: \"955eb799-56c6-47e7-b5f7-eccac4b52134\") " pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.491909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.492160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/955eb799-56c6-47e7-b5f7-eccac4b52134-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mxz2r\" (UID: \"955eb799-56c6-47e7-b5f7-eccac4b52134\") " pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.512255 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.512993 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-khfcc"] Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.513919 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.517782 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-l8g8g" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.534885 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4n6t\" (UniqueName: \"kubernetes.io/projected/955eb799-56c6-47e7-b5f7-eccac4b52134-kube-api-access-r4n6t\") pod \"observability-operator-59bdc8b94-mxz2r\" (UID: \"955eb799-56c6-47e7-b5f7-eccac4b52134\") " pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.562781 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(7200c764384ae181b42aa5f48e5f7ed640515a669d8fafb15a3c1c2611101b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.563295 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(7200c764384ae181b42aa5f48e5f7ed640515a669d8fafb15a3c1c2611101b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.563321 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(7200c764384ae181b42aa5f48e5f7ed640515a669d8fafb15a3c1c2611101b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.563388 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators(8fd72d01-103a-4c09-8858-0fcd773f5d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators(8fd72d01-103a-4c09-8858-0fcd773f5d13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(7200c764384ae181b42aa5f48e5f7ed640515a669d8fafb15a3c1c2611101b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" podUID="8fd72d01-103a-4c09-8858-0fcd773f5d13" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.579797 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(debc4d3b4e60bd303f49adaaf7977d66e0be890bfa63f2af466785ffdde3b768): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.579876 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(debc4d3b4e60bd303f49adaaf7977d66e0be890bfa63f2af466785ffdde3b768): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.579904 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(debc4d3b4e60bd303f49adaaf7977d66e0be890bfa63f2af466785ffdde3b768): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.579966 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators(d89f03d4-3521-43ea-85f4-631c25a3379b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators(d89f03d4-3521-43ea-85f4-631c25a3379b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(debc4d3b4e60bd303f49adaaf7977d66e0be890bfa63f2af466785ffdde3b768): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" podUID="d89f03d4-3521-43ea-85f4-631c25a3379b" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.587015 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqt46\" (UniqueName: \"kubernetes.io/projected/aa193683-1796-419f-ac5f-e620b3206699-kube-api-access-kqt46\") pod \"perses-operator-5bf474d74f-khfcc\" (UID: \"aa193683-1796-419f-ac5f-e620b3206699\") " pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.587080 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa193683-1796-419f-ac5f-e620b3206699-openshift-service-ca\") pod \"perses-operator-5bf474d74f-khfcc\" (UID: \"aa193683-1796-419f-ac5f-e620b3206699\") " pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.617123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"2b7824b8ca0257257556aa055dcd2acdbe7d88455d676ff46684ec99acd030f8"} Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.617194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"9eb580b586ee49a5dda8ccc4fc772b5554c1470d8a1926bd2665fb817cf4c892"} Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.617205 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"7bff51edbb855892a91f12a306ef647b337a95f0ef7347abb7e5f788d3b267a8"} Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.617215 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"9d5efbd964c42f18cd888974f6cf5e7066d206df64490c82add483fb7ac19eb1"} Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.617228 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"2b12755d19f741215a039bab67ee0dfb30f2bd15237d00a542c185029632f1a5"} Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.617243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"3289e014eddb421390d495dafdec5f659adde1c83ccaeb2022bd2a2ecf0df2ef"} Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.681922 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.689196 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqt46\" (UniqueName: \"kubernetes.io/projected/aa193683-1796-419f-ac5f-e620b3206699-kube-api-access-kqt46\") pod \"perses-operator-5bf474d74f-khfcc\" (UID: \"aa193683-1796-419f-ac5f-e620b3206699\") " pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.689278 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa193683-1796-419f-ac5f-e620b3206699-openshift-service-ca\") pod \"perses-operator-5bf474d74f-khfcc\" (UID: \"aa193683-1796-419f-ac5f-e620b3206699\") " pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.690637 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa193683-1796-419f-ac5f-e620b3206699-openshift-service-ca\") pod \"perses-operator-5bf474d74f-khfcc\" (UID: \"aa193683-1796-419f-ac5f-e620b3206699\") " pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.709678 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(5190614aa056757580a08816ff8055037391ed43dbd0799c1b0ac9e6d462f02f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.709763 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(5190614aa056757580a08816ff8055037391ed43dbd0799c1b0ac9e6d462f02f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.709792 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(5190614aa056757580a08816ff8055037391ed43dbd0799c1b0ac9e6d462f02f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.709902 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mxz2r_openshift-operators(955eb799-56c6-47e7-b5f7-eccac4b52134)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mxz2r_openshift-operators(955eb799-56c6-47e7-b5f7-eccac4b52134)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(5190614aa056757580a08816ff8055037391ed43dbd0799c1b0ac9e6d462f02f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.710778 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqt46\" (UniqueName: \"kubernetes.io/projected/aa193683-1796-419f-ac5f-e620b3206699-kube-api-access-kqt46\") pod \"perses-operator-5bf474d74f-khfcc\" (UID: \"aa193683-1796-419f-ac5f-e620b3206699\") " pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: I0218 14:42:56.853260 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.884718 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(302ff2a34bb1938725c3c9e14cc5933aca24f355879e89f539eab1056f572d49): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.884833 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(302ff2a34bb1938725c3c9e14cc5933aca24f355879e89f539eab1056f572d49): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.884868 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(302ff2a34bb1938725c3c9e14cc5933aca24f355879e89f539eab1056f572d49): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:42:56 crc kubenswrapper[4957]: E0218 14:42:56.884962 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-khfcc_openshift-operators(aa193683-1796-419f-ac5f-e620b3206699)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-khfcc_openshift-operators(aa193683-1796-419f-ac5f-e620b3206699)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(302ff2a34bb1938725c3c9e14cc5933aca24f355879e89f539eab1056f572d49): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" Feb 18 14:42:59 crc kubenswrapper[4957]: I0218 14:42:59.643251 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"0b5357e1618826bcdcf0a3a5ed40643e20b168f610d4d2bf963f77aa08f37fd6"} Feb 18 14:43:01 crc kubenswrapper[4957]: I0218 14:43:01.661103 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" event={"ID":"910cf14f-cf93-4db8-8681-12130ae2ae27","Type":"ContainerStarted","Data":"50285c88b0c4a96e4229d647910b021d741381b08998654616d421864b2e1411"} Feb 18 14:43:01 crc kubenswrapper[4957]: I0218 14:43:01.662061 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:43:01 crc kubenswrapper[4957]: I0218 14:43:01.662081 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:43:01 crc kubenswrapper[4957]: I0218 14:43:01.719102 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" podStartSLOduration=7.719067525 podStartE2EDuration="7.719067525s" podCreationTimestamp="2026-02-18 14:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:43:01.716250925 +0000 UTC m=+688.237115689" watchObservedRunningTime="2026-02-18 14:43:01.719067525 +0000 UTC m=+688.239932269" Feb 18 14:43:01 crc kubenswrapper[4957]: I0218 14:43:01.722374 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.398081 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch"] Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.398247 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.398984 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.405597 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb"] Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.405821 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.406569 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.455222 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t"] Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.455407 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.461580 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.473284 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mxz2r"] Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.473474 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.474148 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.508219 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-khfcc"] Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.508366 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.508957 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.517578 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(f012b97398b7c2b1b8dac592a80023a05ece31e374ffb9a19ebc48d33470c9ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.517678 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(f012b97398b7c2b1b8dac592a80023a05ece31e374ffb9a19ebc48d33470c9ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.517708 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(f012b97398b7c2b1b8dac592a80023a05ece31e374ffb9a19ebc48d33470c9ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.517769 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators(d89f03d4-3521-43ea-85f4-631c25a3379b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators(d89f03d4-3521-43ea-85f4-631c25a3379b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(f012b97398b7c2b1b8dac592a80023a05ece31e374ffb9a19ebc48d33470c9ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" podUID="d89f03d4-3521-43ea-85f4-631c25a3379b" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.564807 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(58b2b3804078d93dae0936469da8dd8d7d261df52a9e21f457838d80f86184bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.564878 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(58b2b3804078d93dae0936469da8dd8d7d261df52a9e21f457838d80f86184bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.564896 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(58b2b3804078d93dae0936469da8dd8d7d261df52a9e21f457838d80f86184bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.564963 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators(36ce4ea6-b461-4e76-9db4-10bb9d864512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators(36ce4ea6-b461-4e76-9db4-10bb9d864512)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(58b2b3804078d93dae0936469da8dd8d7d261df52a9e21f457838d80f86184bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" podUID="36ce4ea6-b461-4e76-9db4-10bb9d864512" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.627621 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(77713f2ad16f731fab037c742161d07131f878845d5e3c1cbbd73f1e4a3dbc79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.627708 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(77713f2ad16f731fab037c742161d07131f878845d5e3c1cbbd73f1e4a3dbc79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.627736 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(77713f2ad16f731fab037c742161d07131f878845d5e3c1cbbd73f1e4a3dbc79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.627795 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators(8fd72d01-103a-4c09-8858-0fcd773f5d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators(8fd72d01-103a-4c09-8858-0fcd773f5d13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(77713f2ad16f731fab037c742161d07131f878845d5e3c1cbbd73f1e4a3dbc79): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" podUID="8fd72d01-103a-4c09-8858-0fcd773f5d13" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.683876 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(afbbb08e54e51442f9a14a7b4e63953f3c5f064db10e6bd7c683342337b1f76a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.683972 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(afbbb08e54e51442f9a14a7b4e63953f3c5f064db10e6bd7c683342337b1f76a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.684007 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(afbbb08e54e51442f9a14a7b4e63953f3c5f064db10e6bd7c683342337b1f76a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.684067 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mxz2r_openshift-operators(955eb799-56c6-47e7-b5f7-eccac4b52134)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mxz2r_openshift-operators(955eb799-56c6-47e7-b5f7-eccac4b52134)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(afbbb08e54e51442f9a14a7b4e63953f3c5f064db10e6bd7c683342337b1f76a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.687150 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.689575 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(6f1fce465f82d667807315822490ce618f4dc046bd95fa90533b9cec4e132023): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
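
Every CreatePodSandbox failure in this stretch traces back to the same root cause: the container runtime cannot find a CNI network configuration, because neither kube-multus (crash-looping at this point) nor OVN-Kubernetes (whose ovnkube-node-vc7fk containers are only just starting) has written its config into /etc/kubernetes/cni/net.d/ yet. The sketch below is a rough approximation of the gate that produces this error, not a quote of CRI-O's source: sandbox creation is refused until at least one *.conf, *.conflist, or *.json file appears in that directory (the extension list and the helper name are assumptions, taken from how libcni-based runtimes generally behave).

    // cnicheck.go — minimal sketch of the CNI-config gate behind the
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/" error.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // confDir is the directory named in the error message above.
    const confDir = "/etc/kubernetes/cni/net.d"

    // hasCNIConfig reports whether any CNI network config is present.
    // The accepted extensions mirror what libcni-based runtimes commonly
    // load; treat them as an assumption, not CRI-O's actual code.
    func hasCNIConfig(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		if e.IsDir() {
    			continue
    		}
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasCNIConfig(confDir)
    	if err != nil || !ok {
    		// Until the network provider writes a config here, every
    		// RunPodSandbox call fails with the message seen in the log.
    		fmt.Printf("no CNI configuration file in %s/. Has your network provider started? (err=%v)\n", confDir, err)
    		return
    	}
    	fmt.Println("CNI config present; sandbox creation can proceed")
    }

This also explains the recovery visible further down: once ovnkube-node reports readiness (14:43:01) and the restarted kube-multus container comes up (14:43:18), the config file exists, and the retries at 14:43:27–28 finally produce sandboxes (the ContainerStarted events later in the log).
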
Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.689660 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(6f1fce465f82d667807315822490ce618f4dc046bd95fa90533b9cec4e132023): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.689686 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(6f1fce465f82d667807315822490ce618f4dc046bd95fa90533b9cec4e132023): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:02 crc kubenswrapper[4957]: E0218 14:43:02.689734 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-khfcc_openshift-operators(aa193683-1796-419f-ac5f-e620b3206699)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-khfcc_openshift-operators(aa193683-1796-419f-ac5f-e620b3206699)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(6f1fce465f82d667807315822490ce618f4dc046bd95fa90533b9cec4e132023): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" Feb 18 14:43:02 crc kubenswrapper[4957]: I0218 14:43:02.753138 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:43:04 crc kubenswrapper[4957]: I0218 14:43:04.219384 4957 scope.go:117] "RemoveContainer" containerID="5404d3757559a526399e1c8ad0cad0fc231a9f59ad1d9be14407c358b01a6bb2" Feb 18 14:43:04 crc kubenswrapper[4957]: E0218 14:43:04.223974 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sk96m_openshift-multus(e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb)\"" pod="openshift-multus/multus-sk96m" podUID="e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb" Feb 18 14:43:07 crc kubenswrapper[4957]: I0218 14:43:07.279867 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:43:07 crc kubenswrapper[4957]: I0218 14:43:07.279969 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:14 crc kubenswrapper[4957]: I0218 14:43:14.212582 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:14 crc kubenswrapper[4957]: I0218 14:43:14.215366 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:14 crc kubenswrapper[4957]: E0218 14:43:14.262392 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(6e37b65f113f4bb6de0a723223b9f5d6d7b48912272647f2a4937ed8a026bc11): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:14 crc kubenswrapper[4957]: E0218 14:43:14.262496 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(6e37b65f113f4bb6de0a723223b9f5d6d7b48912272647f2a4937ed8a026bc11): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:14 crc kubenswrapper[4957]: E0218 14:43:14.262522 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(6e37b65f113f4bb6de0a723223b9f5d6d7b48912272647f2a4937ed8a026bc11): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:14 crc kubenswrapper[4957]: E0218 14:43:14.262568 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mxz2r_openshift-operators(955eb799-56c6-47e7-b5f7-eccac4b52134)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mxz2r_openshift-operators(955eb799-56c6-47e7-b5f7-eccac4b52134)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mxz2r_openshift-operators_955eb799-56c6-47e7-b5f7-eccac4b52134_0(6e37b65f113f4bb6de0a723223b9f5d6d7b48912272647f2a4937ed8a026bc11): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" Feb 18 14:43:15 crc kubenswrapper[4957]: I0218 14:43:15.212219 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:15 crc kubenswrapper[4957]: I0218 14:43:15.212264 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:15 crc kubenswrapper[4957]: I0218 14:43:15.212312 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:15 crc kubenswrapper[4957]: I0218 14:43:15.213378 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:15 crc kubenswrapper[4957]: I0218 14:43:15.213537 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:15 crc kubenswrapper[4957]: I0218 14:43:15.213546 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.258749 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(f52216f88e37eb00649eda219ebd4ecc0467f675bf8092f3b3c98af29f1457e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.258844 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(f52216f88e37eb00649eda219ebd4ecc0467f675bf8092f3b3c98af29f1457e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.258872 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(f52216f88e37eb00649eda219ebd4ecc0467f675bf8092f3b3c98af29f1457e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.258923 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators(36ce4ea6-b461-4e76-9db4-10bb9d864512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators(36ce4ea6-b461-4e76-9db4-10bb9d864512)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h8fsb_openshift-operators_36ce4ea6-b461-4e76-9db4-10bb9d864512_0(f52216f88e37eb00649eda219ebd4ecc0467f675bf8092f3b3c98af29f1457e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" podUID="36ce4ea6-b461-4e76-9db4-10bb9d864512" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.263622 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(a6abe3f00af1cb71d9408919f0a210c42b66eac4f44c1207da884a722284e411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
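
The same five openshift-operators pods fail in waves — first at 14:42:56, again at 14:43:02, and again here at 14:43:14–16 — reflecting the pod workers' backoff between sync attempts. When triaging a log like this, it helps to tally failures per pod; the small standalone helper below is our own tooling, not part of the kubelet (the kubelet.log filename is an assumption), and counts the kuberuntime_manager "CreatePodSandbox for pod failed" entry that appears once per attempt.

    // sandboxfails.go — tally CreatePodSandbox failures per pod so the
    // retry cadence in a kubelet log becomes visible at a glance.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches the kuberuntime_manager.go failure entry, one per sync
    // attempt, capturing the namespace/name from its pod="..." field.
    var createFailed = regexp.MustCompile(`CreatePodSandbox for pod failed.*pod="([^"]+)"`)

    func main() {
    	f, err := os.Open("kubelet.log") // filename is an assumption
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	counts := map[string]int{}
    	sc := bufio.NewScanner(f)
    	// Kubelet entries can exceed the default 64 KiB token limit.
    	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
    	for sc.Scan() {
    		if m := createFailed.FindStringSubmatch(sc.Text()); m != nil {
    			counts[m[1]]++
    		}
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	for pod, n := range counts {
    		fmt.Printf("%3d sandbox failures  %s\n", n, pod)
    	}
    }

Run against this window of the log, it would show three failed attempts each for the obo-prometheus-operator, its two admission-webhook replicas, observability-operator, and perses-operator pods, before the successful sandbox creations at 14:43:27–35 below.
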
Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.263690 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(a6abe3f00af1cb71d9408919f0a210c42b66eac4f44c1207da884a722284e411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.263727 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(a6abe3f00af1cb71d9408919f0a210c42b66eac4f44c1207da884a722284e411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.263774 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-khfcc_openshift-operators(aa193683-1796-419f-ac5f-e620b3206699)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-khfcc_openshift-operators(aa193683-1796-419f-ac5f-e620b3206699)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-khfcc_openshift-operators_aa193683-1796-419f-ac5f-e620b3206699_0(a6abe3f00af1cb71d9408919f0a210c42b66eac4f44c1207da884a722284e411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.276539 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(e8f3fde8a891cc4a0c02d7693c964abbd441780edb64640e0ba59fc8e5ab562d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.276631 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(e8f3fde8a891cc4a0c02d7693c964abbd441780edb64640e0ba59fc8e5ab562d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.276667 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(e8f3fde8a891cc4a0c02d7693c964abbd441780edb64640e0ba59fc8e5ab562d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:15 crc kubenswrapper[4957]: E0218 14:43:15.276726 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators(d89f03d4-3521-43ea-85f4-631c25a3379b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators(d89f03d4-3521-43ea-85f4-631c25a3379b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_openshift-operators_d89f03d4-3521-43ea-85f4-631c25a3379b_0(e8f3fde8a891cc4a0c02d7693c964abbd441780edb64640e0ba59fc8e5ab562d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" podUID="d89f03d4-3521-43ea-85f4-631c25a3379b" Feb 18 14:43:16 crc kubenswrapper[4957]: I0218 14:43:16.212854 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:16 crc kubenswrapper[4957]: I0218 14:43:16.213399 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:16 crc kubenswrapper[4957]: E0218 14:43:16.238057 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(b6925e6885a7ec6e4133673530101d43af1fd1dcd19677d038a6c70f252b2c39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:43:16 crc kubenswrapper[4957]: E0218 14:43:16.238144 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(b6925e6885a7ec6e4133673530101d43af1fd1dcd19677d038a6c70f252b2c39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:16 crc kubenswrapper[4957]: E0218 14:43:16.238174 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(b6925e6885a7ec6e4133673530101d43af1fd1dcd19677d038a6c70f252b2c39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:16 crc kubenswrapper[4957]: E0218 14:43:16.238253 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators(8fd72d01-103a-4c09-8858-0fcd773f5d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators(8fd72d01-103a-4c09-8858-0fcd773f5d13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_openshift-operators_8fd72d01-103a-4c09-8858-0fcd773f5d13_0(b6925e6885a7ec6e4133673530101d43af1fd1dcd19677d038a6c70f252b2c39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" podUID="8fd72d01-103a-4c09-8858-0fcd773f5d13" Feb 18 14:43:18 crc kubenswrapper[4957]: I0218 14:43:18.215680 4957 scope.go:117] "RemoveContainer" containerID="5404d3757559a526399e1c8ad0cad0fc231a9f59ad1d9be14407c358b01a6bb2" Feb 18 14:43:18 crc kubenswrapper[4957]: I0218 14:43:18.819268 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sk96m_e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb/kube-multus/2.log" Feb 18 14:43:18 crc kubenswrapper[4957]: I0218 14:43:18.820638 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sk96m" event={"ID":"e7e4902e-7bb6-44ec-a3ac-3d8b3afe3fbb","Type":"ContainerStarted","Data":"cf4ce339d1b10d262029d4a18158e1c21ede30fdf26b66e89db91af85b97b245"} Feb 18 14:43:24 crc kubenswrapper[4957]: I0218 14:43:24.840242 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" Feb 18 14:43:27 crc kubenswrapper[4957]: I0218 14:43:27.212530 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:27 crc kubenswrapper[4957]: I0218 14:43:27.213235 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" Feb 18 14:43:27 crc kubenswrapper[4957]: I0218 14:43:27.652358 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb"] Feb 18 14:43:27 crc kubenswrapper[4957]: W0218 14:43:27.658970 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ce4ea6_b461_4e76_9db4_10bb9d864512.slice/crio-ed2db5f7b5e880d909e1f4cfaab8eb1a428007ce1aa11d4d965690a74536689d WatchSource:0}: Error finding container ed2db5f7b5e880d909e1f4cfaab8eb1a428007ce1aa11d4d965690a74536689d: Status 404 returned error can't find the container with id ed2db5f7b5e880d909e1f4cfaab8eb1a428007ce1aa11d4d965690a74536689d Feb 18 14:43:27 crc kubenswrapper[4957]: I0218 14:43:27.875997 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" event={"ID":"36ce4ea6-b461-4e76-9db4-10bb9d864512","Type":"ContainerStarted","Data":"ed2db5f7b5e880d909e1f4cfaab8eb1a428007ce1aa11d4d965690a74536689d"} Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.212768 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.213182 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.213838 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.214350 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.699697 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t"] Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.704050 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mxz2r"] Feb 18 14:43:28 crc kubenswrapper[4957]: W0218 14:43:28.706009 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd72d01_103a_4c09_8858_0fcd773f5d13.slice/crio-97a69ec332ffd4de8ca2d0bf08a78bcf7f225d1d58bd2ee61283feb86c241800 WatchSource:0}: Error finding container 97a69ec332ffd4de8ca2d0bf08a78bcf7f225d1d58bd2ee61283feb86c241800: Status 404 returned error can't find the container with id 97a69ec332ffd4de8ca2d0bf08a78bcf7f225d1d58bd2ee61283feb86c241800 Feb 18 14:43:28 crc kubenswrapper[4957]: W0218 14:43:28.707451 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod955eb799_56c6_47e7_b5f7_eccac4b52134.slice/crio-718de111cb497fb27706e68810eb8b5df3d9edaed033f6687502f40a836df9bd WatchSource:0}: Error finding container 718de111cb497fb27706e68810eb8b5df3d9edaed033f6687502f40a836df9bd: Status 404 returned error can't find the container with id 718de111cb497fb27706e68810eb8b5df3d9edaed033f6687502f40a836df9bd Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.883062 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" event={"ID":"955eb799-56c6-47e7-b5f7-eccac4b52134","Type":"ContainerStarted","Data":"718de111cb497fb27706e68810eb8b5df3d9edaed033f6687502f40a836df9bd"} Feb 18 14:43:28 crc kubenswrapper[4957]: I0218 14:43:28.884054 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" event={"ID":"8fd72d01-103a-4c09-8858-0fcd773f5d13","Type":"ContainerStarted","Data":"97a69ec332ffd4de8ca2d0bf08a78bcf7f225d1d58bd2ee61283feb86c241800"} Feb 18 14:43:30 crc kubenswrapper[4957]: I0218 14:43:30.212802 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:30 crc kubenswrapper[4957]: I0218 14:43:30.214032 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:30 crc kubenswrapper[4957]: I0218 14:43:30.214342 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" Feb 18 14:43:30 crc kubenswrapper[4957]: I0218 14:43:30.215435 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.647552 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-khfcc"] Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.652725 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch"] Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.953782 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" event={"ID":"8fd72d01-103a-4c09-8858-0fcd773f5d13","Type":"ContainerStarted","Data":"e5881068fdf7a4db448d3870cfce3ca236deb92f5067097024a24181bc155dce"} Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.957035 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" event={"ID":"d89f03d4-3521-43ea-85f4-631c25a3379b","Type":"ContainerStarted","Data":"bc4f44308412d00742c411f6db3f624cf18d848de164bef28aa3b7df14260e1b"} Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.957181 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" event={"ID":"d89f03d4-3521-43ea-85f4-631c25a3379b","Type":"ContainerStarted","Data":"04003b25958858467b6eb7fc56fb25c90ede1c934791fa2c23cc4463f5e44494"} Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.960267 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" event={"ID":"955eb799-56c6-47e7-b5f7-eccac4b52134","Type":"ContainerStarted","Data":"379afd60011766f0721f864c454ea6aef3f693607460f456e8d5baa010fdd53a"} Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.960527 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.962350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" event={"ID":"aa193683-1796-419f-ac5f-e620b3206699","Type":"ContainerStarted","Data":"f84ef48036ead1ec47b44e545dcb9d21999457be88f86c00c47e7186d96bb53d"} Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.964887 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" event={"ID":"36ce4ea6-b461-4e76-9db4-10bb9d864512","Type":"ContainerStarted","Data":"bc4e6f8d47f6596743fd1b2b758dbcc546051f7d1a1d9087ab48924f8cc0ea9a"} Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.981985 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t" podStartSLOduration=33.557866525 podStartE2EDuration="39.981964511s" podCreationTimestamp="2026-02-18 14:42:56 +0000 UTC" firstStartedPulling="2026-02-18 14:43:28.708960593 +0000 UTC m=+715.229825337" lastFinishedPulling="2026-02-18 14:43:35.133058579 +0000 UTC m=+721.653923323" observedRunningTime="2026-02-18 14:43:35.978581725 +0000 UTC m=+722.499446489" 
watchObservedRunningTime="2026-02-18 14:43:35.981964511 +0000 UTC m=+722.502829255" Feb 18 14:43:35 crc kubenswrapper[4957]: I0218 14:43:35.999449 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 14:43:36 crc kubenswrapper[4957]: I0218 14:43:36.012439 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h8fsb" podStartSLOduration=33.542189385 podStartE2EDuration="41.012383995s" podCreationTimestamp="2026-02-18 14:42:55 +0000 UTC" firstStartedPulling="2026-02-18 14:43:27.662530569 +0000 UTC m=+714.183395303" lastFinishedPulling="2026-02-18 14:43:35.132725169 +0000 UTC m=+721.653589913" observedRunningTime="2026-02-18 14:43:36.011056498 +0000 UTC m=+722.531921242" watchObservedRunningTime="2026-02-18 14:43:36.012383995 +0000 UTC m=+722.533248759" Feb 18 14:43:36 crc kubenswrapper[4957]: I0218 14:43:36.039684 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch" podStartSLOduration=40.03966183 podStartE2EDuration="40.03966183s" podCreationTimestamp="2026-02-18 14:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:43:36.038937119 +0000 UTC m=+722.559801863" watchObservedRunningTime="2026-02-18 14:43:36.03966183 +0000 UTC m=+722.560526574" Feb 18 14:43:36 crc kubenswrapper[4957]: I0218 14:43:36.064580 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podStartSLOduration=33.616590793 podStartE2EDuration="40.064562687s" podCreationTimestamp="2026-02-18 14:42:56 +0000 UTC" firstStartedPulling="2026-02-18 14:43:28.71131306 +0000 UTC m=+715.232177804" lastFinishedPulling="2026-02-18 14:43:35.159284954 +0000 UTC m=+721.680149698" observedRunningTime="2026-02-18 14:43:36.061203082 +0000 UTC m=+722.582067826" watchObservedRunningTime="2026-02-18 14:43:36.064562687 +0000 UTC m=+722.585427431" Feb 18 14:43:37 crc kubenswrapper[4957]: I0218 14:43:37.279478 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:43:37 crc kubenswrapper[4957]: I0218 14:43:37.279921 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:38 crc kubenswrapper[4957]: I0218 14:43:38.995343 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" event={"ID":"aa193683-1796-419f-ac5f-e620b3206699","Type":"ContainerStarted","Data":"d6caf397f755d39bb5bee235e4c48b38ab62424df52a1bb38ca1ac994fe5c131"} Feb 18 14:43:38 crc kubenswrapper[4957]: I0218 14:43:38.996879 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.937678 4957 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podStartSLOduration=46.189922821 podStartE2EDuration="48.937655207s" podCreationTimestamp="2026-02-18 14:42:56 +0000 UTC" firstStartedPulling="2026-02-18 14:43:35.659549083 +0000 UTC m=+722.180413827" lastFinishedPulling="2026-02-18 14:43:38.407281469 +0000 UTC m=+724.928146213" observedRunningTime="2026-02-18 14:43:39.075730706 +0000 UTC m=+725.596595460" watchObservedRunningTime="2026-02-18 14:43:44.937655207 +0000 UTC m=+731.458519951" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.940951 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs"] Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.941751 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.944106 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.944357 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.945559 4957 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-svmjt" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.960413 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs"] Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.970525 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568q2\" (UniqueName: \"kubernetes.io/projected/4aa8b825-ca42-4619-b7bb-380195ddbf84-kube-api-access-568q2\") pod \"cert-manager-cainjector-cf98fcc89-t6tqs\" (UID: \"4aa8b825-ca42-4619-b7bb-380195ddbf84\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.974579 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mqqs4"] Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.975523 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mqqs4" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.977636 4957 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-sb29z" Feb 18 14:43:44 crc kubenswrapper[4957]: I0218 14:43:44.997960 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mqqs4"] Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.006194 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q5pw9"] Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.007632 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.011640 4957 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-47jqz" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.020866 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q5pw9"] Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.073125 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8f8\" (UniqueName: \"kubernetes.io/projected/12b5712d-e8c7-43f6-b44f-4641e48d8046-kube-api-access-pv8f8\") pod \"cert-manager-858654f9db-mqqs4\" (UID: \"12b5712d-e8c7-43f6-b44f-4641e48d8046\") " pod="cert-manager/cert-manager-858654f9db-mqqs4" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.073204 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jfp\" (UniqueName: \"kubernetes.io/projected/77a4b221-67be-4248-beaa-1f4602e3b35b-kube-api-access-b2jfp\") pod \"cert-manager-webhook-687f57d79b-q5pw9\" (UID: \"77a4b221-67be-4248-beaa-1f4602e3b35b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.073300 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-568q2\" (UniqueName: \"kubernetes.io/projected/4aa8b825-ca42-4619-b7bb-380195ddbf84-kube-api-access-568q2\") pod \"cert-manager-cainjector-cf98fcc89-t6tqs\" (UID: \"4aa8b825-ca42-4619-b7bb-380195ddbf84\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.096441 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-568q2\" (UniqueName: \"kubernetes.io/projected/4aa8b825-ca42-4619-b7bb-380195ddbf84-kube-api-access-568q2\") pod \"cert-manager-cainjector-cf98fcc89-t6tqs\" (UID: \"4aa8b825-ca42-4619-b7bb-380195ddbf84\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.174346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8f8\" (UniqueName: \"kubernetes.io/projected/12b5712d-e8c7-43f6-b44f-4641e48d8046-kube-api-access-pv8f8\") pod \"cert-manager-858654f9db-mqqs4\" (UID: \"12b5712d-e8c7-43f6-b44f-4641e48d8046\") " pod="cert-manager/cert-manager-858654f9db-mqqs4" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.174411 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jfp\" (UniqueName: \"kubernetes.io/projected/77a4b221-67be-4248-beaa-1f4602e3b35b-kube-api-access-b2jfp\") pod \"cert-manager-webhook-687f57d79b-q5pw9\" (UID: \"77a4b221-67be-4248-beaa-1f4602e3b35b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.192214 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8f8\" (UniqueName: \"kubernetes.io/projected/12b5712d-e8c7-43f6-b44f-4641e48d8046-kube-api-access-pv8f8\") pod \"cert-manager-858654f9db-mqqs4\" (UID: \"12b5712d-e8c7-43f6-b44f-4641e48d8046\") " pod="cert-manager/cert-manager-858654f9db-mqqs4" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.192431 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jfp\" 
(UniqueName: \"kubernetes.io/projected/77a4b221-67be-4248-beaa-1f4602e3b35b-kube-api-access-b2jfp\") pod \"cert-manager-webhook-687f57d79b-q5pw9\" (UID: \"77a4b221-67be-4248-beaa-1f4602e3b35b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.271485 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.294562 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mqqs4" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.329511 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.535249 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs"] Feb 18 14:43:45 crc kubenswrapper[4957]: W0218 14:43:45.546202 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aa8b825_ca42_4619_b7bb_380195ddbf84.slice/crio-09a41aefb6a522a35f91adc295c5df4818586c03230670b80dec95cbda22fec3 WatchSource:0}: Error finding container 09a41aefb6a522a35f91adc295c5df4818586c03230670b80dec95cbda22fec3: Status 404 returned error can't find the container with id 09a41aefb6a522a35f91adc295c5df4818586c03230670b80dec95cbda22fec3 Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.816779 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q5pw9"] Feb 18 14:43:45 crc kubenswrapper[4957]: I0218 14:43:45.822305 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mqqs4"] Feb 18 14:43:45 crc kubenswrapper[4957]: W0218 14:43:45.822922 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b5712d_e8c7_43f6_b44f_4641e48d8046.slice/crio-34c96e50cf2ceff50d04187576c13549a4bde4e707a607297a4dc9f73fc3b34f WatchSource:0}: Error finding container 34c96e50cf2ceff50d04187576c13549a4bde4e707a607297a4dc9f73fc3b34f: Status 404 returned error can't find the container with id 34c96e50cf2ceff50d04187576c13549a4bde4e707a607297a4dc9f73fc3b34f Feb 18 14:43:46 crc kubenswrapper[4957]: I0218 14:43:46.046482 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" event={"ID":"77a4b221-67be-4248-beaa-1f4602e3b35b","Type":"ContainerStarted","Data":"5814e5f23f1d38a29d65a2045e5370c177cd07b41f22a7676ed8189a7e53b43f"} Feb 18 14:43:46 crc kubenswrapper[4957]: I0218 14:43:46.047951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" event={"ID":"4aa8b825-ca42-4619-b7bb-380195ddbf84","Type":"ContainerStarted","Data":"09a41aefb6a522a35f91adc295c5df4818586c03230670b80dec95cbda22fec3"} Feb 18 14:43:46 crc kubenswrapper[4957]: I0218 14:43:46.048975 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mqqs4" event={"ID":"12b5712d-e8c7-43f6-b44f-4641e48d8046","Type":"ContainerStarted","Data":"34c96e50cf2ceff50d04187576c13549a4bde4e707a607297a4dc9f73fc3b34f"} Feb 18 14:43:46 crc kubenswrapper[4957]: I0218 14:43:46.856305 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 14:43:51 crc kubenswrapper[4957]: I0218 14:43:51.083991 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" event={"ID":"4aa8b825-ca42-4619-b7bb-380195ddbf84","Type":"ContainerStarted","Data":"3ada7aeb0ffdd3df230bf6f2ac03d7037addb2ee0da0d0a6cb1ed7fac8f85c0f"} Feb 18 14:43:51 crc kubenswrapper[4957]: I0218 14:43:51.111675 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t6tqs" podStartSLOduration=2.544950009 podStartE2EDuration="7.111652522s" podCreationTimestamp="2026-02-18 14:43:44 +0000 UTC" firstStartedPulling="2026-02-18 14:43:45.548809846 +0000 UTC m=+732.069674590" lastFinishedPulling="2026-02-18 14:43:50.115512359 +0000 UTC m=+736.636377103" observedRunningTime="2026-02-18 14:43:51.109187802 +0000 UTC m=+737.630052556" watchObservedRunningTime="2026-02-18 14:43:51.111652522 +0000 UTC m=+737.632517266" Feb 18 14:43:52 crc kubenswrapper[4957]: I0218 14:43:52.093869 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" event={"ID":"77a4b221-67be-4248-beaa-1f4602e3b35b","Type":"ContainerStarted","Data":"b353c117523172744de7839ed70ea1395c04c0968731c41a24a113c9cb1a10dc"} Feb 18 14:43:52 crc kubenswrapper[4957]: I0218 14:43:52.094252 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:43:52 crc kubenswrapper[4957]: I0218 14:43:52.134381 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podStartSLOduration=2.184966636 podStartE2EDuration="8.134362041s" podCreationTimestamp="2026-02-18 14:43:44 +0000 UTC" firstStartedPulling="2026-02-18 14:43:45.826531055 +0000 UTC m=+732.347395799" lastFinishedPulling="2026-02-18 14:43:51.77592646 +0000 UTC m=+738.296791204" observedRunningTime="2026-02-18 14:43:52.133775995 +0000 UTC m=+738.654640739" watchObservedRunningTime="2026-02-18 14:43:52.134362041 +0000 UTC m=+738.655226785" Feb 18 14:43:52 crc kubenswrapper[4957]: I0218 14:43:52.135435 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mqqs4" event={"ID":"12b5712d-e8c7-43f6-b44f-4641e48d8046","Type":"ContainerStarted","Data":"957c9b85889964ebfdf5e6dfe9b6a027656d895ad90ccc1406ea8245605e46fc"} Feb 18 14:43:52 crc kubenswrapper[4957]: I0218 14:43:52.153855 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mqqs4" podStartSLOduration=2.137906449 podStartE2EDuration="8.153833054s" podCreationTimestamp="2026-02-18 14:43:44 +0000 UTC" firstStartedPulling="2026-02-18 14:43:45.826459313 +0000 UTC m=+732.347324057" lastFinishedPulling="2026-02-18 14:43:51.842385898 +0000 UTC m=+738.363250662" observedRunningTime="2026-02-18 14:43:52.151596421 +0000 UTC m=+738.672461165" watchObservedRunningTime="2026-02-18 14:43:52.153833054 +0000 UTC m=+738.674697798" Feb 18 14:44:00 crc kubenswrapper[4957]: I0218 14:44:00.346448 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 14:44:07 crc kubenswrapper[4957]: I0218 14:44:07.279970 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:44:07 crc kubenswrapper[4957]: I0218 14:44:07.280910 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:44:07 crc kubenswrapper[4957]: I0218 14:44:07.281002 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:44:07 crc kubenswrapper[4957]: I0218 14:44:07.282235 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7941b927f3e2a93f4dd37a4d3392ef224a62719b3a5e2aadf6edc8ae2eb35a9d"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:44:07 crc kubenswrapper[4957]: I0218 14:44:07.282366 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://7941b927f3e2a93f4dd37a4d3392ef224a62719b3a5e2aadf6edc8ae2eb35a9d" gracePeriod=600 Feb 18 14:44:08 crc kubenswrapper[4957]: I0218 14:44:08.261472 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="7941b927f3e2a93f4dd37a4d3392ef224a62719b3a5e2aadf6edc8ae2eb35a9d" exitCode=0 Feb 18 14:44:08 crc kubenswrapper[4957]: I0218 14:44:08.261538 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"7941b927f3e2a93f4dd37a4d3392ef224a62719b3a5e2aadf6edc8ae2eb35a9d"} Feb 18 14:44:08 crc kubenswrapper[4957]: I0218 14:44:08.262764 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"c57c13136d0c689489f9ddc49289b10f813ba077ecbc190959d9174e61a4424f"} Feb 18 14:44:08 crc kubenswrapper[4957]: I0218 14:44:08.262855 4957 scope.go:117] "RemoveContainer" containerID="135878b003cd530333b1214c0015b7d7b2523f3ba8a66d6b435c6895505efe92" Feb 18 14:44:10 crc kubenswrapper[4957]: I0218 14:44:10.179786 4957 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.630371 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg"] Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.633394 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.637647 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.644606 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg"] Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.714327 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.714422 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.714508 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c4fr\" (UniqueName: \"kubernetes.io/projected/9994b531-894e-4fbf-a8dc-8bdaa0684615-kube-api-access-6c4fr\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.816713 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.816832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c4fr\" (UniqueName: \"kubernetes.io/projected/9994b531-894e-4fbf-a8dc-8bdaa0684615-kube-api-access-6c4fr\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.817008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.817644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.817829 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.835690 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v"] Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.837308 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.846857 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c4fr\" (UniqueName: \"kubernetes.io/projected/9994b531-894e-4fbf-a8dc-8bdaa0684615-kube-api-access-6c4fr\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.856850 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v"] Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.918666 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.918779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dn8k\" (UniqueName: \"kubernetes.io/projected/f303e553-d646-4bd0-9fff-92ba6ad6dc90-kube-api-access-6dn8k\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.918871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:26 crc kubenswrapper[4957]: I0218 14:44:26.971335 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.019918 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dn8k\" (UniqueName: \"kubernetes.io/projected/f303e553-d646-4bd0-9fff-92ba6ad6dc90-kube-api-access-6dn8k\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.020022 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.020110 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.020616 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.020645 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.042635 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dn8k\" (UniqueName: \"kubernetes.io/projected/f303e553-d646-4bd0-9fff-92ba6ad6dc90-kube-api-access-6dn8k\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.195065 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.216757 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg"] Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.411764 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" event={"ID":"9994b531-894e-4fbf-a8dc-8bdaa0684615","Type":"ContainerStarted","Data":"d38cbef4ecf848dd0c1edd0a96621045cb4ee7bc070e9ca57fc79789324e4930"} Feb 18 14:44:27 crc kubenswrapper[4957]: I0218 14:44:27.454005 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v"] Feb 18 14:44:27 crc kubenswrapper[4957]: W0218 14:44:27.456259 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf303e553_d646_4bd0_9fff_92ba6ad6dc90.slice/crio-835e8940c4170462d279295d8d879bcf1234e8e67c87805e8ea46e53fce95b61 WatchSource:0}: Error finding container 835e8940c4170462d279295d8d879bcf1234e8e67c87805e8ea46e53fce95b61: Status 404 returned error can't find the container with id 835e8940c4170462d279295d8d879bcf1234e8e67c87805e8ea46e53fce95b61 Feb 18 14:44:28 crc kubenswrapper[4957]: I0218 14:44:28.421739 4957 generic.go:334] "Generic (PLEG): container finished" podID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerID="2e8e21d49b50ace0f25d5e602e03e5879154f3433ff3dea32c01e8e7ff102dd6" exitCode=0 Feb 18 14:44:28 crc kubenswrapper[4957]: I0218 14:44:28.421869 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" event={"ID":"9994b531-894e-4fbf-a8dc-8bdaa0684615","Type":"ContainerDied","Data":"2e8e21d49b50ace0f25d5e602e03e5879154f3433ff3dea32c01e8e7ff102dd6"} Feb 18 14:44:28 crc kubenswrapper[4957]: I0218 14:44:28.424471 4957 generic.go:334] "Generic (PLEG): container finished" podID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerID="7de12bc4d68fa825724ff6adde35d75b8926d99953d45f55afe36c978dbc1608" exitCode=0 Feb 18 14:44:28 crc kubenswrapper[4957]: I0218 14:44:28.424522 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" event={"ID":"f303e553-d646-4bd0-9fff-92ba6ad6dc90","Type":"ContainerDied","Data":"7de12bc4d68fa825724ff6adde35d75b8926d99953d45f55afe36c978dbc1608"} Feb 18 14:44:28 crc kubenswrapper[4957]: I0218 14:44:28.424564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" event={"ID":"f303e553-d646-4bd0-9fff-92ba6ad6dc90","Type":"ContainerStarted","Data":"835e8940c4170462d279295d8d879bcf1234e8e67c87805e8ea46e53fce95b61"} Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.385613 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dfxwx"] Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.388024 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.410099 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dfxwx"] Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.542623 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-utilities\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.542701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-catalog-content\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.542743 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rk7q\" (UniqueName: \"kubernetes.io/projected/847dae77-c922-4d36-a043-df06ec81c15a-kube-api-access-7rk7q\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.643272 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-utilities\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.643369 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-catalog-content\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.643485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rk7q\" (UniqueName: \"kubernetes.io/projected/847dae77-c922-4d36-a043-df06ec81c15a-kube-api-access-7rk7q\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.645132 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-catalog-content\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.645125 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-utilities\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.673507 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7rk7q\" (UniqueName: \"kubernetes.io/projected/847dae77-c922-4d36-a043-df06ec81c15a-kube-api-access-7rk7q\") pod \"redhat-operators-dfxwx\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:30 crc kubenswrapper[4957]: I0218 14:44:30.725467 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.033633 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dfxwx"] Feb 18 14:44:31 crc kubenswrapper[4957]: W0218 14:44:31.038754 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod847dae77_c922_4d36_a043_df06ec81c15a.slice/crio-ec2294be40c2a600bc1575b5f7257014383b391ac3b6fdbff2799ff16bd1e383 WatchSource:0}: Error finding container ec2294be40c2a600bc1575b5f7257014383b391ac3b6fdbff2799ff16bd1e383: Status 404 returned error can't find the container with id ec2294be40c2a600bc1575b5f7257014383b391ac3b6fdbff2799ff16bd1e383 Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.468047 4957 generic.go:334] "Generic (PLEG): container finished" podID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerID="81dd4656da5cce4602a28e74aa489db041d8a2380833fc978b7e7f19b5f56759" exitCode=0 Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.468125 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" event={"ID":"9994b531-894e-4fbf-a8dc-8bdaa0684615","Type":"ContainerDied","Data":"81dd4656da5cce4602a28e74aa489db041d8a2380833fc978b7e7f19b5f56759"} Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.472653 4957 generic.go:334] "Generic (PLEG): container finished" podID="847dae77-c922-4d36-a043-df06ec81c15a" containerID="8e759a7dddd8efb9ea3177b74313912d86f46772aae8793fde2c75b11549f026" exitCode=0 Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.472733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerDied","Data":"8e759a7dddd8efb9ea3177b74313912d86f46772aae8793fde2c75b11549f026"} Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.472768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerStarted","Data":"ec2294be40c2a600bc1575b5f7257014383b391ac3b6fdbff2799ff16bd1e383"} Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.477574 4957 generic.go:334] "Generic (PLEG): container finished" podID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerID="64405bdc9666a96d6cd3b4cca628be11042314a1504b89d496e726c76020d842" exitCode=0 Feb 18 14:44:31 crc kubenswrapper[4957]: I0218 14:44:31.477626 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" event={"ID":"f303e553-d646-4bd0-9fff-92ba6ad6dc90","Type":"ContainerDied","Data":"64405bdc9666a96d6cd3b4cca628be11042314a1504b89d496e726c76020d842"} Feb 18 14:44:32 crc kubenswrapper[4957]: I0218 14:44:32.487606 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" 
event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerStarted","Data":"794c1f3fa2df71446dd4c0ebdeeb61f9d22b879499426a7dd43b5ca66a6b1b72"} Feb 18 14:44:32 crc kubenswrapper[4957]: I0218 14:44:32.490786 4957 generic.go:334] "Generic (PLEG): container finished" podID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerID="8285a811d705a778b3cfa3d63ad989f790d62f13011f90b9d409e23d71d7f77d" exitCode=0 Feb 18 14:44:32 crc kubenswrapper[4957]: I0218 14:44:32.490844 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" event={"ID":"f303e553-d646-4bd0-9fff-92ba6ad6dc90","Type":"ContainerDied","Data":"8285a811d705a778b3cfa3d63ad989f790d62f13011f90b9d409e23d71d7f77d"} Feb 18 14:44:32 crc kubenswrapper[4957]: I0218 14:44:32.495137 4957 generic.go:334] "Generic (PLEG): container finished" podID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerID="d63cfbe9fbc7788609abe0018ba940937cbc9d3445e44a3fa1e099ea292a8ee0" exitCode=0 Feb 18 14:44:32 crc kubenswrapper[4957]: I0218 14:44:32.495192 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" event={"ID":"9994b531-894e-4fbf-a8dc-8bdaa0684615","Type":"ContainerDied","Data":"d63cfbe9fbc7788609abe0018ba940937cbc9d3445e44a3fa1e099ea292a8ee0"} Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.779471 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.844034 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.909481 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-util\") pod \"9994b531-894e-4fbf-a8dc-8bdaa0684615\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.909604 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c4fr\" (UniqueName: \"kubernetes.io/projected/9994b531-894e-4fbf-a8dc-8bdaa0684615-kube-api-access-6c4fr\") pod \"9994b531-894e-4fbf-a8dc-8bdaa0684615\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.909754 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-bundle\") pod \"9994b531-894e-4fbf-a8dc-8bdaa0684615\" (UID: \"9994b531-894e-4fbf-a8dc-8bdaa0684615\") " Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.910500 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-bundle" (OuterVolumeSpecName: "bundle") pod "9994b531-894e-4fbf-a8dc-8bdaa0684615" (UID: "9994b531-894e-4fbf-a8dc-8bdaa0684615"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.916811 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9994b531-894e-4fbf-a8dc-8bdaa0684615-kube-api-access-6c4fr" (OuterVolumeSpecName: "kube-api-access-6c4fr") pod "9994b531-894e-4fbf-a8dc-8bdaa0684615" (UID: "9994b531-894e-4fbf-a8dc-8bdaa0684615"). InnerVolumeSpecName "kube-api-access-6c4fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4957]: I0218 14:44:33.935514 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-util" (OuterVolumeSpecName: "util") pod "9994b531-894e-4fbf-a8dc-8bdaa0684615" (UID: "9994b531-894e-4fbf-a8dc-8bdaa0684615"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.010782 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-bundle\") pod \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.010945 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dn8k\" (UniqueName: \"kubernetes.io/projected/f303e553-d646-4bd0-9fff-92ba6ad6dc90-kube-api-access-6dn8k\") pod \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.010976 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-util\") pod \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\" (UID: \"f303e553-d646-4bd0-9fff-92ba6ad6dc90\") " Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.011305 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.011320 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c4fr\" (UniqueName: \"kubernetes.io/projected/9994b531-894e-4fbf-a8dc-8bdaa0684615-kube-api-access-6c4fr\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.011331 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9994b531-894e-4fbf-a8dc-8bdaa0684615-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.012658 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-bundle" (OuterVolumeSpecName: "bundle") pod "f303e553-d646-4bd0-9fff-92ba6ad6dc90" (UID: "f303e553-d646-4bd0-9fff-92ba6ad6dc90"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.013983 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f303e553-d646-4bd0-9fff-92ba6ad6dc90-kube-api-access-6dn8k" (OuterVolumeSpecName: "kube-api-access-6dn8k") pod "f303e553-d646-4bd0-9fff-92ba6ad6dc90" (UID: "f303e553-d646-4bd0-9fff-92ba6ad6dc90"). 
InnerVolumeSpecName "kube-api-access-6dn8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.021950 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-util" (OuterVolumeSpecName: "util") pod "f303e553-d646-4bd0-9fff-92ba6ad6dc90" (UID: "f303e553-d646-4bd0-9fff-92ba6ad6dc90"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.112791 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.112842 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dn8k\" (UniqueName: \"kubernetes.io/projected/f303e553-d646-4bd0-9fff-92ba6ad6dc90-kube-api-access-6dn8k\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.112858 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f303e553-d646-4bd0-9fff-92ba6ad6dc90-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.514709 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" event={"ID":"9994b531-894e-4fbf-a8dc-8bdaa0684615","Type":"ContainerDied","Data":"d38cbef4ecf848dd0c1edd0a96621045cb4ee7bc070e9ca57fc79789324e4930"} Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.514786 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d38cbef4ecf848dd0c1edd0a96621045cb4ee7bc070e9ca57fc79789324e4930" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.515111 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.517176 4957 generic.go:334] "Generic (PLEG): container finished" podID="847dae77-c922-4d36-a043-df06ec81c15a" containerID="794c1f3fa2df71446dd4c0ebdeeb61f9d22b879499426a7dd43b5ca66a6b1b72" exitCode=0 Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.517324 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerDied","Data":"794c1f3fa2df71446dd4c0ebdeeb61f9d22b879499426a7dd43b5ca66a6b1b72"} Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.522271 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" event={"ID":"f303e553-d646-4bd0-9fff-92ba6ad6dc90","Type":"ContainerDied","Data":"835e8940c4170462d279295d8d879bcf1234e8e67c87805e8ea46e53fce95b61"} Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.522312 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835e8940c4170462d279295d8d879bcf1234e8e67c87805e8ea46e53fce95b61" Feb 18 14:44:34 crc kubenswrapper[4957]: I0218 14:44:34.522397 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v" Feb 18 14:44:35 crc kubenswrapper[4957]: I0218 14:44:35.533107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerStarted","Data":"d0e0c1cdd74b6128c975451b4349c18aec7c323a4c4915700462c8fa1e77841e"} Feb 18 14:44:35 crc kubenswrapper[4957]: I0218 14:44:35.552886 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dfxwx" podStartSLOduration=2.083366649 podStartE2EDuration="5.552863406s" podCreationTimestamp="2026-02-18 14:44:30 +0000 UTC" firstStartedPulling="2026-02-18 14:44:31.484151939 +0000 UTC m=+778.005016683" lastFinishedPulling="2026-02-18 14:44:34.953648696 +0000 UTC m=+781.474513440" observedRunningTime="2026-02-18 14:44:35.552814845 +0000 UTC m=+782.073679589" watchObservedRunningTime="2026-02-18 14:44:35.552863406 +0000 UTC m=+782.073728150" Feb 18 14:44:40 crc kubenswrapper[4957]: I0218 14:44:40.726857 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:40 crc kubenswrapper[4957]: I0218 14:44:40.727582 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:41 crc kubenswrapper[4957]: I0218 14:44:41.775078 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dfxwx" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="registry-server" probeResult="failure" output=< Feb 18 14:44:41 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:44:41 crc kubenswrapper[4957]: > Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.591443 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7"] Feb 18 14:44:43 crc kubenswrapper[4957]: E0218 14:44:43.592296 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="pull" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592315 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="pull" Feb 18 14:44:43 crc kubenswrapper[4957]: E0218 14:44:43.592330 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerName="extract" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592337 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerName="extract" Feb 18 14:44:43 crc kubenswrapper[4957]: E0218 14:44:43.592346 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerName="util" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592355 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerName="util" Feb 18 14:44:43 crc kubenswrapper[4957]: E0218 14:44:43.592372 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerName="pull" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592377 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" 
containerName="pull" Feb 18 14:44:43 crc kubenswrapper[4957]: E0218 14:44:43.592387 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="util" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592392 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="util" Feb 18 14:44:43 crc kubenswrapper[4957]: E0218 14:44:43.592409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="extract" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592440 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="extract" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592563 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f303e553-d646-4bd0-9fff-92ba6ad6dc90" containerName="extract" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.592586 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9994b531-894e-4fbf-a8dc-8bdaa0684615" containerName="extract" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.593517 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.595959 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.601890 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7vx4q" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.601947 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.601889 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.601902 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.602116 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.620590 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7"] Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.769903 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.769980 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-webhook-cert\") pod 
\"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.770117 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-apiservice-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.770147 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2ck\" (UniqueName: \"kubernetes.io/projected/da87ca13-b23a-4345-b79d-46c8e9bec9b3-kube-api-access-md2ck\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.770203 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/da87ca13-b23a-4345-b79d-46c8e9bec9b3-manager-config\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.872527 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-apiservice-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.872596 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2ck\" (UniqueName: \"kubernetes.io/projected/da87ca13-b23a-4345-b79d-46c8e9bec9b3-kube-api-access-md2ck\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.872682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/da87ca13-b23a-4345-b79d-46c8e9bec9b3-manager-config\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.872742 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.872783 4957 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-webhook-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.873796 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/da87ca13-b23a-4345-b79d-46c8e9bec9b3-manager-config\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.880350 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.880627 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-webhook-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.886086 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da87ca13-b23a-4345-b79d-46c8e9bec9b3-apiservice-cert\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.915937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2ck\" (UniqueName: \"kubernetes.io/projected/da87ca13-b23a-4345-b79d-46c8e9bec9b3-kube-api-access-md2ck\") pod \"loki-operator-controller-manager-669bf4b44b-ndlc7\" (UID: \"da87ca13-b23a-4345-b79d-46c8e9bec9b3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:43 crc kubenswrapper[4957]: I0218 14:44:43.917931 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:44:44 crc kubenswrapper[4957]: I0218 14:44:44.395244 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7"] Feb 18 14:44:44 crc kubenswrapper[4957]: W0218 14:44:44.409647 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda87ca13_b23a_4345_b79d_46c8e9bec9b3.slice/crio-5b951b11d53a3f3ded7e2df3a9cb089065cbcb6f758effd87b85c1363fff6093 WatchSource:0}: Error finding container 5b951b11d53a3f3ded7e2df3a9cb089065cbcb6f758effd87b85c1363fff6093: Status 404 returned error can't find the container with id 5b951b11d53a3f3ded7e2df3a9cb089065cbcb6f758effd87b85c1363fff6093 Feb 18 14:44:44 crc kubenswrapper[4957]: I0218 14:44:44.594070 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" event={"ID":"da87ca13-b23a-4345-b79d-46c8e9bec9b3","Type":"ContainerStarted","Data":"5b951b11d53a3f3ded7e2df3a9cb089065cbcb6f758effd87b85c1363fff6093"} Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.322117 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-kl2hc"] Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.323832 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.326365 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.327294 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-mpsj9" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.327466 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.335914 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-kl2hc"] Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.431952 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5zj\" (UniqueName: \"kubernetes.io/projected/e703948a-fdb1-445a-8ece-94bd76181899-kube-api-access-ts5zj\") pod \"cluster-logging-operator-c769fd969-kl2hc\" (UID: \"e703948a-fdb1-445a-8ece-94bd76181899\") " pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.533136 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts5zj\" (UniqueName: \"kubernetes.io/projected/e703948a-fdb1-445a-8ece-94bd76181899-kube-api-access-ts5zj\") pod \"cluster-logging-operator-c769fd969-kl2hc\" (UID: \"e703948a-fdb1-445a-8ece-94bd76181899\") " pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.577584 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts5zj\" (UniqueName: \"kubernetes.io/projected/e703948a-fdb1-445a-8ece-94bd76181899-kube-api-access-ts5zj\") pod \"cluster-logging-operator-c769fd969-kl2hc\" (UID: 
\"e703948a-fdb1-445a-8ece-94bd76181899\") " pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" Feb 18 14:44:47 crc kubenswrapper[4957]: I0218 14:44:47.643770 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" Feb 18 14:44:49 crc kubenswrapper[4957]: I0218 14:44:49.807166 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-kl2hc"] Feb 18 14:44:50 crc kubenswrapper[4957]: I0218 14:44:50.700998 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" event={"ID":"da87ca13-b23a-4345-b79d-46c8e9bec9b3","Type":"ContainerStarted","Data":"e370b474353f69743b7b5ca79226784bb826fa7402419014686a331422c5efbd"} Feb 18 14:44:50 crc kubenswrapper[4957]: I0218 14:44:50.702401 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" event={"ID":"e703948a-fdb1-445a-8ece-94bd76181899","Type":"ContainerStarted","Data":"38a8fe7e762dc8f1bdc6c6cdc103f0d026262b37f19ebf982391183411dfdd5a"} Feb 18 14:44:50 crc kubenswrapper[4957]: I0218 14:44:50.776964 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:50 crc kubenswrapper[4957]: I0218 14:44:50.820044 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:53 crc kubenswrapper[4957]: I0218 14:44:53.779915 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dfxwx"] Feb 18 14:44:53 crc kubenswrapper[4957]: I0218 14:44:53.780593 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dfxwx" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="registry-server" containerID="cri-o://d0e0c1cdd74b6128c975451b4349c18aec7c323a4c4915700462c8fa1e77841e" gracePeriod=2 Feb 18 14:44:54 crc kubenswrapper[4957]: I0218 14:44:54.747384 4957 generic.go:334] "Generic (PLEG): container finished" podID="847dae77-c922-4d36-a043-df06ec81c15a" containerID="d0e0c1cdd74b6128c975451b4349c18aec7c323a4c4915700462c8fa1e77841e" exitCode=0 Feb 18 14:44:54 crc kubenswrapper[4957]: I0218 14:44:54.747738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerDied","Data":"d0e0c1cdd74b6128c975451b4349c18aec7c323a4c4915700462c8fa1e77841e"} Feb 18 14:44:56 crc kubenswrapper[4957]: I0218 14:44:56.779780 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfxwx" event={"ID":"847dae77-c922-4d36-a043-df06ec81c15a","Type":"ContainerDied","Data":"ec2294be40c2a600bc1575b5f7257014383b391ac3b6fdbff2799ff16bd1e383"} Feb 18 14:44:56 crc kubenswrapper[4957]: I0218 14:44:56.780166 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2294be40c2a600bc1575b5f7257014383b391ac3b6fdbff2799ff16bd1e383" Feb 18 14:44:56 crc kubenswrapper[4957]: I0218 14:44:56.849399 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.011880 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-catalog-content\") pod \"847dae77-c922-4d36-a043-df06ec81c15a\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.012009 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rk7q\" (UniqueName: \"kubernetes.io/projected/847dae77-c922-4d36-a043-df06ec81c15a-kube-api-access-7rk7q\") pod \"847dae77-c922-4d36-a043-df06ec81c15a\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.012029 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-utilities\") pod \"847dae77-c922-4d36-a043-df06ec81c15a\" (UID: \"847dae77-c922-4d36-a043-df06ec81c15a\") " Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.013079 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-utilities" (OuterVolumeSpecName: "utilities") pod "847dae77-c922-4d36-a043-df06ec81c15a" (UID: "847dae77-c922-4d36-a043-df06ec81c15a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.043451 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847dae77-c922-4d36-a043-df06ec81c15a-kube-api-access-7rk7q" (OuterVolumeSpecName: "kube-api-access-7rk7q") pod "847dae77-c922-4d36-a043-df06ec81c15a" (UID: "847dae77-c922-4d36-a043-df06ec81c15a"). InnerVolumeSpecName "kube-api-access-7rk7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.118374 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rk7q\" (UniqueName: \"kubernetes.io/projected/847dae77-c922-4d36-a043-df06ec81c15a-kube-api-access-7rk7q\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.118434 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.160753 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847dae77-c922-4d36-a043-df06ec81c15a" (UID: "847dae77-c922-4d36-a043-df06ec81c15a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.222701 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847dae77-c922-4d36-a043-df06ec81c15a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.789776 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dfxwx" Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.824202 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dfxwx"] Feb 18 14:44:57 crc kubenswrapper[4957]: I0218 14:44:57.829535 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dfxwx"] Feb 18 14:44:58 crc kubenswrapper[4957]: I0218 14:44:58.223022 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847dae77-c922-4d36-a043-df06ec81c15a" path="/var/lib/kubelet/pods/847dae77-c922-4d36-a043-df06ec81c15a/volumes" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.164677 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn"] Feb 18 14:45:00 crc kubenswrapper[4957]: E0218 14:45:00.165309 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="extract-content" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.165326 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="extract-content" Feb 18 14:45:00 crc kubenswrapper[4957]: E0218 14:45:00.165343 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="registry-server" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.165351 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="registry-server" Feb 18 14:45:00 crc kubenswrapper[4957]: E0218 14:45:00.165376 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="extract-utilities" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.165382 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="extract-utilities" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.165534 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="847dae77-c922-4d36-a043-df06ec81c15a" containerName="registry-server" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.165983 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.168956 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.168965 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.179219 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn"] Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.281712 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83e53af5-e229-449c-a9e9-422344aaecef-config-volume\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.281775 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83e53af5-e229-449c-a9e9-422344aaecef-secret-volume\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.281833 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9c8\" (UniqueName: \"kubernetes.io/projected/83e53af5-e229-449c-a9e9-422344aaecef-kube-api-access-dl9c8\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.383785 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83e53af5-e229-449c-a9e9-422344aaecef-config-volume\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.383852 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83e53af5-e229-449c-a9e9-422344aaecef-secret-volume\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.383911 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl9c8\" (UniqueName: \"kubernetes.io/projected/83e53af5-e229-449c-a9e9-422344aaecef-kube-api-access-dl9c8\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.386325 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83e53af5-e229-449c-a9e9-422344aaecef-config-volume\") pod 
\"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.401837 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl9c8\" (UniqueName: \"kubernetes.io/projected/83e53af5-e229-449c-a9e9-422344aaecef-kube-api-access-dl9c8\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.404574 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83e53af5-e229-449c-a9e9-422344aaecef-secret-volume\") pod \"collect-profiles-29523765-nsgpn\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:00 crc kubenswrapper[4957]: I0218 14:45:00.500220 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.502804 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn"] Feb 18 14:45:01 crc kubenswrapper[4957]: W0218 14:45:01.517239 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83e53af5_e229_449c_a9e9_422344aaecef.slice/crio-a589b4f75a6b055ddaa3a01bff2eaa7042b120ae6400c74ee13efbc10722ab7f WatchSource:0}: Error finding container a589b4f75a6b055ddaa3a01bff2eaa7042b120ae6400c74ee13efbc10722ab7f: Status 404 returned error can't find the container with id a589b4f75a6b055ddaa3a01bff2eaa7042b120ae6400c74ee13efbc10722ab7f Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.822197 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" event={"ID":"e703948a-fdb1-445a-8ece-94bd76181899","Type":"ContainerStarted","Data":"8daff115751c6abf5a87ce3e124b990eaa59d2ad45257aeccb450b356dcfb904"} Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.824887 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" event={"ID":"da87ca13-b23a-4345-b79d-46c8e9bec9b3","Type":"ContainerStarted","Data":"838ef3b17d05297d3f1de5581c0b94312c8dea4390737bbaae22a3bc342eb4b4"} Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.825432 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.826471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" event={"ID":"83e53af5-e229-449c-a9e9-422344aaecef","Type":"ContainerStarted","Data":"a589b4f75a6b055ddaa3a01bff2eaa7042b120ae6400c74ee13efbc10722ab7f"} Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.828512 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.855619 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/cluster-logging-operator-c769fd969-kl2hc" podStartSLOduration=3.469275841 podStartE2EDuration="14.855599777s" podCreationTimestamp="2026-02-18 14:44:47 +0000 UTC" firstStartedPulling="2026-02-18 14:44:49.833600936 +0000 UTC m=+796.354465680" lastFinishedPulling="2026-02-18 14:45:01.219924872 +0000 UTC m=+807.740789616" observedRunningTime="2026-02-18 14:45:01.853525248 +0000 UTC m=+808.374389992" watchObservedRunningTime="2026-02-18 14:45:01.855599777 +0000 UTC m=+808.376464521" Feb 18 14:45:01 crc kubenswrapper[4957]: I0218 14:45:01.949616 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podStartSLOduration=2.122016579 podStartE2EDuration="18.949559356s" podCreationTimestamp="2026-02-18 14:44:43 +0000 UTC" firstStartedPulling="2026-02-18 14:44:44.413671179 +0000 UTC m=+790.934535923" lastFinishedPulling="2026-02-18 14:45:01.241213956 +0000 UTC m=+807.762078700" observedRunningTime="2026-02-18 14:45:01.924213516 +0000 UTC m=+808.445078270" watchObservedRunningTime="2026-02-18 14:45:01.949559356 +0000 UTC m=+808.470424120" Feb 18 14:45:02 crc kubenswrapper[4957]: I0218 14:45:02.838601 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" event={"ID":"83e53af5-e229-449c-a9e9-422344aaecef","Type":"ContainerStarted","Data":"4af4a642c75ffc855a545a026043ec21b9800efe61fc37d90afcc336ce460db8"} Feb 18 14:45:02 crc kubenswrapper[4957]: I0218 14:45:02.856506 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" podStartSLOduration=2.85627547 podStartE2EDuration="2.85627547s" podCreationTimestamp="2026-02-18 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:45:02.853884382 +0000 UTC m=+809.374749136" watchObservedRunningTime="2026-02-18 14:45:02.85627547 +0000 UTC m=+809.377140214" Feb 18 14:45:03 crc kubenswrapper[4957]: I0218 14:45:03.847820 4957 generic.go:334] "Generic (PLEG): container finished" podID="83e53af5-e229-449c-a9e9-422344aaecef" containerID="4af4a642c75ffc855a545a026043ec21b9800efe61fc37d90afcc336ce460db8" exitCode=0 Feb 18 14:45:03 crc kubenswrapper[4957]: I0218 14:45:03.847923 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" event={"ID":"83e53af5-e229-449c-a9e9-422344aaecef","Type":"ContainerDied","Data":"4af4a642c75ffc855a545a026043ec21b9800efe61fc37d90afcc336ce460db8"} Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.219857 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.386278 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83e53af5-e229-449c-a9e9-422344aaecef-config-volume\") pod \"83e53af5-e229-449c-a9e9-422344aaecef\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.386397 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83e53af5-e229-449c-a9e9-422344aaecef-secret-volume\") pod \"83e53af5-e229-449c-a9e9-422344aaecef\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.386454 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl9c8\" (UniqueName: \"kubernetes.io/projected/83e53af5-e229-449c-a9e9-422344aaecef-kube-api-access-dl9c8\") pod \"83e53af5-e229-449c-a9e9-422344aaecef\" (UID: \"83e53af5-e229-449c-a9e9-422344aaecef\") " Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.387297 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83e53af5-e229-449c-a9e9-422344aaecef-config-volume" (OuterVolumeSpecName: "config-volume") pod "83e53af5-e229-449c-a9e9-422344aaecef" (UID: "83e53af5-e229-449c-a9e9-422344aaecef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.407802 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e53af5-e229-449c-a9e9-422344aaecef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83e53af5-e229-449c-a9e9-422344aaecef" (UID: "83e53af5-e229-449c-a9e9-422344aaecef"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.420515 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e53af5-e229-449c-a9e9-422344aaecef-kube-api-access-dl9c8" (OuterVolumeSpecName: "kube-api-access-dl9c8") pod "83e53af5-e229-449c-a9e9-422344aaecef" (UID: "83e53af5-e229-449c-a9e9-422344aaecef"). InnerVolumeSpecName "kube-api-access-dl9c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.488829 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83e53af5-e229-449c-a9e9-422344aaecef-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.488883 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl9c8\" (UniqueName: \"kubernetes.io/projected/83e53af5-e229-449c-a9e9-422344aaecef-kube-api-access-dl9c8\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.488895 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83e53af5-e229-449c-a9e9-422344aaecef-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.874917 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" event={"ID":"83e53af5-e229-449c-a9e9-422344aaecef","Type":"ContainerDied","Data":"a589b4f75a6b055ddaa3a01bff2eaa7042b120ae6400c74ee13efbc10722ab7f"} Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.875371 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a589b4f75a6b055ddaa3a01bff2eaa7042b120ae6400c74ee13efbc10722ab7f" Feb 18 14:45:05 crc kubenswrapper[4957]: I0218 14:45:05.875046 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.172401 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 18 14:45:07 crc kubenswrapper[4957]: E0218 14:45:07.172869 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e53af5-e229-449c-a9e9-422344aaecef" containerName="collect-profiles" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.172888 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e53af5-e229-449c-a9e9-422344aaecef" containerName="collect-profiles" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.173025 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e53af5-e229-449c-a9e9-422344aaecef" containerName="collect-profiles" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.173741 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.176780 4957 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-9hnkz" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.179127 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.179174 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.183635 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.353786 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wp5z\" (UniqueName: \"kubernetes.io/projected/83c3ca23-057b-4a2d-aa3c-45f967ef55e6-kube-api-access-8wp5z\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") " pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.353952 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d693314f-2f60-402b-a36c-cefe49e76260\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d693314f-2f60-402b-a36c-cefe49e76260\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") " pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.455098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d693314f-2f60-402b-a36c-cefe49e76260\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d693314f-2f60-402b-a36c-cefe49e76260\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") " pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.455157 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wp5z\" (UniqueName: \"kubernetes.io/projected/83c3ca23-057b-4a2d-aa3c-45f967ef55e6-kube-api-access-8wp5z\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") " pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.459614 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
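[Annotation] The "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." entry above is kubelet's csi_attacher reacting to the node capabilities advertised by the kubevirt.io.hostpath-provisioner CSI driver: when a driver does not report STAGE_UNSTAGE_VOLUME from NodeGetCapabilities, kubelet treats MountVolume.MountDevice (NodeStageVolume) as a no-op and proceeds directly to MountVolume.SetUp (NodePublishVolume). A minimal sketch of the driver side, using the standard CSI spec Go bindings (github.com/container-storage-interface/spec/lib/go/csi); this is illustrative only, not the hostpath provisioner's actual source:

package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical CSI node plugin.
type nodeServer struct{}

// NodeGetCapabilities returns an empty capability list. Because
// STAGE_UNSTAGE_VOLUME is absent, kubelet's csi_attacher logs
// "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..."
// and goes straight to NodePublishVolume (MountVolume.SetUp).
func (n *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{}, nil
}

func main() {
	resp, _ := (&nodeServer{}).NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
	fmt.Println("advertised node capabilities:", resp.GetCapabilities()) // empty
}

A driver that does stage volumes would instead return a NodeServiceCapability of type STAGE_UNSTAGE_VOLUME here, and kubelet would then run NodeStageVolume against the .../globalmount device mount path reported in the MountDevice entry that follows.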
Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.459649 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d693314f-2f60-402b-a36c-cefe49e76260\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d693314f-2f60-402b-a36c-cefe49e76260\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de3cb7193ef373c3c91f60851fafab3de5793dc4020d771770621b3de85ccc15/globalmount\"" pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.476675 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wp5z\" (UniqueName: \"kubernetes.io/projected/83c3ca23-057b-4a2d-aa3c-45f967ef55e6-kube-api-access-8wp5z\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") " pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.487384 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d693314f-2f60-402b-a36c-cefe49e76260\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d693314f-2f60-402b-a36c-cefe49e76260\") pod \"minio\" (UID: \"83c3ca23-057b-4a2d-aa3c-45f967ef55e6\") " pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.492645 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.733013 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 14:45:07 crc kubenswrapper[4957]: I0218 14:45:07.893513 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"83c3ca23-057b-4a2d-aa3c-45f967ef55e6","Type":"ContainerStarted","Data":"c68c5cc67b3cbe247623d1dcc29a96f0d76a275a95846b2f8eb356c93fcd54f9"} Feb 18 14:45:11 crc kubenswrapper[4957]: I0218 14:45:11.955460 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"83c3ca23-057b-4a2d-aa3c-45f967ef55e6","Type":"ContainerStarted","Data":"17c7f205c47a082e5733fee08d19490be621c9fe0a9af6c55173dd6280e6ecc3"} Feb 18 14:45:11 crc kubenswrapper[4957]: I0218 14:45:11.976220 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.332742683 podStartE2EDuration="7.976192111s" podCreationTimestamp="2026-02-18 14:45:04 +0000 UTC" firstStartedPulling="2026-02-18 14:45:07.739835312 +0000 UTC m=+814.260700056" lastFinishedPulling="2026-02-18 14:45:11.38328474 +0000 UTC m=+817.904149484" observedRunningTime="2026-02-18 14:45:11.973212457 +0000 UTC m=+818.494077201" watchObservedRunningTime="2026-02-18 14:45:11.976192111 +0000 UTC m=+818.497056875" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.558689 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.560344 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.569255 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.569289 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.569734 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.569894 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-6t8l2" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.570755 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.576764 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.744917 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.744998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r4q9\" (UniqueName: \"kubernetes.io/projected/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-kube-api-access-6r4q9\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.745061 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.745259 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-config\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.745490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.747953 4957 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-t4b27"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.749372 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.755442 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.755763 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.757551 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-t4b27"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.758523 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.824387 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.825651 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.829661 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.830067 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.840108 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849061 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849137 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r4q9\" (UniqueName: \"kubernetes.io/projected/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-kube-api-access-6r4q9\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849183 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-config\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849213 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-ca-bundle\") pod 
\"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849237 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcp2s\" (UniqueName: \"kubernetes.io/projected/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-kube-api-access-jcp2s\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849262 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849287 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849312 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849375 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-config\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849437 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.849462 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.851314 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-ca-bundle\") pod 
\"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.851302 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-config\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.881897 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.886272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r4q9\" (UniqueName: \"kubernetes.io/projected/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-kube-api-access-6r4q9\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.907377 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4de1a2b8-9bfb-4104-b065-e0c991cb95ea-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-2wbxs\" (UID: \"4de1a2b8-9bfb-4104-b065-e0c991cb95ea\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951675 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-config\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951748 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcp2s\" (UniqueName: \"kubernetes.io/projected/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-kube-api-access-jcp2s\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951773 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951805 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 
14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951825 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951880 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-config\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951948 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.951976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.952002 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.952024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjpr\" (UniqueName: \"kubernetes.io/projected/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-kube-api-access-dhjpr\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.953442 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.953473 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-config\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.956371 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.963031 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.969899 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.976295 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.977309 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.983154 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.983435 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.983584 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.983642 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"] Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.983774 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.989701 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.992617 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.993532 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-lmn7g" Feb 18 14:45:16 crc kubenswrapper[4957]: I0218 14:45:16.994609 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcp2s\" (UniqueName: \"kubernetes.io/projected/c093fd9d-72e8-42d1-a5ad-5e687f61aa9e-kube-api-access-jcp2s\") pod \"logging-loki-querier-76bf7b6d45-t4b27\" (UID: \"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.017957 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"] Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.053689 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.053786 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.053832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjpr\" (UniqueName: \"kubernetes.io/projected/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-kube-api-access-dhjpr\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.053937 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.053976 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-config\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.054920 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-ca-bundle\") pod 
\"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.056103 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-config\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.066103 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.069859 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.070269 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"] Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.090018 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.094886 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjpr\" (UniqueName: \"kubernetes.io/projected/d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7-kube-api-access-dhjpr\") pod \"logging-loki-query-frontend-6d6859c548-x4qsb\" (UID: \"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.143358 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.143358 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.155824 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.155914 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwws5\" (UniqueName: \"kubernetes.io/projected/6e82b47f-b61b-40dd-92f1-62180459082f-kube-api-access-rwws5\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.155963 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156031 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tenants\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-rbac\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156108 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156156 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-lokistack-gateway\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156179 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tenants\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156195 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqn6\" (UniqueName: \"kubernetes.io/projected/ae719427-398b-455b-8d4f-d1f96df0e800-kube-api-access-gmqn6\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156214 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156230 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-rbac\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156310 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-lokistack-gateway\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156363 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.156406 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.203661 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258373 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tenants\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258470 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-rbac\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258508 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258568 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-lokistack-gateway\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258599 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tenants\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258632 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258655 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqn6\" (UniqueName: \"kubernetes.io/projected/ae719427-398b-455b-8d4f-d1f96df0e800-kube-api-access-gmqn6\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258677 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-rbac\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258699 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-lokistack-gateway\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258769 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258866 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258927 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.258966 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwws5\" (UniqueName: \"kubernetes.io/projected/6e82b47f-b61b-40dd-92f1-62180459082f-kube-api-access-rwws5\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.259041 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.260464 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.260926 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-rbac\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.261200 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-rbac\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.261582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: E0218 14:45:17.263757 4957 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Feb 18 14:45:17 crc kubenswrapper[4957]: E0218 14:45:17.263775 4957 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Feb 18 14:45:17 crc kubenswrapper[4957]: E0218 14:45:17.263838 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tls-secret podName:6e82b47f-b61b-40dd-92f1-62180459082f nodeName:}" failed. No retries permitted until 2026-02-18 14:45:17.76381225 +0000 UTC m=+824.284676994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tls-secret") pod "logging-loki-gateway-7b58bd6fcd-58vxq" (UID: "6e82b47f-b61b-40dd-92f1-62180459082f") : secret "logging-loki-gateway-http" not found
Feb 18 14:45:17 crc kubenswrapper[4957]: E0218 14:45:17.263935 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tls-secret podName:ae719427-398b-455b-8d4f-d1f96df0e800 nodeName:}" failed. No retries permitted until 2026-02-18 14:45:17.763852101 +0000 UTC m=+824.284716845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tls-secret") pod "logging-loki-gateway-7b58bd6fcd-n8wjk" (UID: "ae719427-398b-455b-8d4f-d1f96df0e800") : secret "logging-loki-gateway-http" not found
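[editor's note] The two E-level nestedpendingoperations.go records above show the per-operation backoff gate: the tls-secret volume for both gateway pods cannot be set up because the secret logging-loki-gateway-http does not exist yet, so each operation is blocked until now+durationBeforeRetry (500ms here) before the reconciler may try again; the retry at 14:45:17.77 succeeds once the secret appears. Below is a minimal Go sketch of such an exponential gate, assuming a simplified model with a hypothetical cap, not the kubelet's exact constants or code.

```go
package main

import (
	"fmt"
	"time"
)

// pendingOp models one volume operation under retry backoff.
type pendingOp struct {
	lastErrorTime time.Time
	retryDelay    time.Duration
}

const (
	initialDelay = 500 * time.Millisecond // matches the observed durationBeforeRetry
	maxDelay     = 2 * time.Minute        // hypothetical cap for this sketch
)

// fail records a failure and grows the gate before the next attempt.
func (p *pendingOp) fail(now time.Time) {
	switch {
	case p.retryDelay == 0:
		p.retryDelay = initialDelay
	case p.retryDelay < maxDelay:
		p.retryDelay *= 2
	}
	p.lastErrorTime = now
	fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s)\n",
		now.Add(p.retryDelay).Format(time.RFC3339Nano), p.retryDelay)
}

// ready reports whether the backoff window has elapsed.
func (p *pendingOp) ready(now time.Time) bool {
	return now.Sub(p.lastErrorTime) >= p.retryDelay
}

func main() {
	op := &pendingOp{}
	t0 := time.Now()
	op.fail(t0)                                 // first failure: 500ms gate, as in the log
	fmt.Println(op.ready(t0))                   // false: still inside the window
	fmt.Println(op.ready(t0.Add(time.Second))) // true: window elapsed, retry allowed
	op.fail(t0.Add(time.Second))                // a second failure would double the gate
}
```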
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.264995 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-lokistack-gateway\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.265280 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-lokistack-gateway\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.265764 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.266213 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tenants\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.266975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.267013 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.269105 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.274625 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tenants\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.286844 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqn6\" (UniqueName: \"kubernetes.io/projected/ae719427-398b-455b-8d4f-d1f96df0e800-kube-api-access-gmqn6\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.293087 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwws5\" (UniqueName: \"kubernetes.io/projected/6e82b47f-b61b-40dd-92f1-62180459082f-kube-api-access-rwws5\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.585077 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-t4b27"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.663375 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.734068 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.739121 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.744109 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.744402 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.766322 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.771098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.771164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.792050 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6e82b47f-b61b-40dd-92f1-62180459082f-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-58vxq\" (UID: \"6e82b47f-b61b-40dd-92f1-62180459082f\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.793551 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae719427-398b-455b-8d4f-d1f96df0e800-tls-secret\") pod \"logging-loki-gateway-7b58bd6fcd-n8wjk\" (UID: \"ae719427-398b-455b-8d4f-d1f96df0e800\") " pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.822227 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.824900 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.831996 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.844906 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.851550 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872603 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872660 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q74n\" (UniqueName: \"kubernetes.io/projected/f0549571-def1-4cd5-9cae-77780cf6870b-kube-api-access-9q74n\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872689 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872720 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0549571-def1-4cd5-9cae-77780cf6870b-config\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872740 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872814 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.872852 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.930376 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.931923 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.934324 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.939974 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.940135 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.943269 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.979313 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980705 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980751 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980775 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980811 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980844 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980862 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vn65\" (UniqueName: \"kubernetes.io/projected/8ebf8bd1-097b-45c7-be49-c38760e885e2-kube-api-access-9vn65\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980900 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980916 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980938 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980976 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.980998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7688\" (UniqueName: \"kubernetes.io/projected/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-kube-api-access-k7688\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981016 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981058 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981084 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q74n\" (UniqueName: \"kubernetes.io/projected/f0549571-def1-4cd5-9cae-77780cf6870b-kube-api-access-9q74n\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981151 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981175 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981328 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebf8bd1-097b-45c7-be49-c38760e885e2-config\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0549571-def1-4cd5-9cae-77780cf6870b-config\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981432 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981456 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.981513 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.993352 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0549571-def1-4cd5-9cae-77780cf6870b-config\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:17 crc kubenswrapper[4957]: I0218 14:45:17.994180 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.013509 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs"]
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.028945 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.030130 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.030152 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7746d28fc8cfd8e96f340ce54f245d05738220e0b5fb8ff07065ba5912b1c3e0/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.032801 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q74n\" (UniqueName: \"kubernetes.io/projected/f0549571-def1-4cd5-9cae-77780cf6870b-kube-api-access-9q74n\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.032818 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.034902 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f0549571-def1-4cd5-9cae-77780cf6870b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.045994 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.046059 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/04d826a4ae4bc32e6bb30773f6bb1158b869e1cabc14867d2290b91b3f5a383a/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.080762 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" event={"ID":"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7","Type":"ContainerStarted","Data":"a0a5b338768a61446cca3310bc183ec1fef6df6002b1df2b7bf6024ab5d031f1"}
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088304 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088326 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088361 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vn65\" (UniqueName: \"kubernetes.io/projected/8ebf8bd1-097b-45c7-be49-c38760e885e2-kube-api-access-9vn65\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088398 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088445 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088469 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7688\" (UniqueName: \"kubernetes.io/projected/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-kube-api-access-k7688\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088510 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088549 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088566 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebf8bd1-097b-45c7-be49-c38760e885e2-config\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088608 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088640 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.088690 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.095535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.096902 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.096956 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.097330 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" event={"ID":"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e","Type":"ContainerStarted","Data":"710bfcce0526997967b539393d7bfe63217aedafd82a95ab43a8893b7757db6f"}
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.097535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.098326 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebf8bd1-097b-45c7-be49-c38760e885e2-config\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.099182 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.100106 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.105441 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8ebf8bd1-097b-45c7-be49-c38760e885e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.112445 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.112481 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4685eceaaa3cbccf76f07e79c68fbda50fa777cca228ecfe74a5c6171b19cbc/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.113504 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.113553 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/231f8912dd6fa2c55648512850be19a91585e87b47b35d01c983f4dee7150411/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.120612 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.120622 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.125440 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7688\" (UniqueName: \"kubernetes.io/projected/c7e42da2-0160-4c19-bd98-1ebb4d0d84dc-kube-api-access-k7688\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.135821 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vn65\" (UniqueName: \"kubernetes.io/projected/8ebf8bd1-097b-45c7-be49-c38760e885e2-kube-api-access-9vn65\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.185307 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32e8d43a-b032-4d7d-9038-e03bff6a896d\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.203869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-016eb6bd-b854-4870-a438-a4e9e43742bc\") pod \"logging-loki-compactor-0\" (UID: \"8ebf8bd1-097b-45c7-be49-c38760e885e2\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.208644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d2aca6e-9120-4b67-be38-44ca2808106e\") pod \"logging-loki-ingester-0\" (UID: \"f0549571-def1-4cd5-9cae-77780cf6870b\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.276654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f32b534-5951-4c3e-b9b2-026a0e305786\") pod \"logging-loki-index-gateway-0\" (UID: \"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.307791 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.382572 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.389123 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk"]
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.469786 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.679729 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq"]
Feb 18 14:45:18 crc kubenswrapper[4957]: W0218 14:45:18.695019 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e82b47f_b61b_40dd_92f1_62180459082f.slice/crio-095dc854c1b984b46d0292eab10bdb6976b1d0def50d9cfda49ecf19f20a5a0b WatchSource:0}: Error finding container 095dc854c1b984b46d0292eab10bdb6976b1d0def50d9cfda49ecf19f20a5a0b: Status 404 returned error can't find the container with id 095dc854c1b984b46d0292eab10bdb6976b1d0def50d9cfda49ecf19f20a5a0b
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.830312 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.855022 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 18 14:45:18 crc kubenswrapper[4957]: I0218 14:45:18.976698 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 18 14:45:19 crc kubenswrapper[4957]: I0218 14:45:19.107223 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" event={"ID":"4de1a2b8-9bfb-4104-b065-e0c991cb95ea","Type":"ContainerStarted","Data":"8f97f880730245f6068df1a94c2ce4cf0f5e1420886b478b2e92081a9b418619"}
Feb 18 14:45:19 crc kubenswrapper[4957]: I0218 14:45:19.109702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc","Type":"ContainerStarted","Data":"a0379d5aa2101f8ec0d4787f310417a5cdf1f2a7ca514e16652e87997e7644a3"}
Feb 18 14:45:19 crc kubenswrapper[4957]: I0218 14:45:19.110905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" event={"ID":"ae719427-398b-455b-8d4f-d1f96df0e800","Type":"ContainerStarted","Data":"2f030f88b8b3ab93f609a36b30be1d05d74ec893c4eac235409a9f84c95c14f9"}
Feb 18 14:45:19 crc kubenswrapper[4957]: I0218 14:45:19.112988 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8ebf8bd1-097b-45c7-be49-c38760e885e2","Type":"ContainerStarted","Data":"0e184f5aaf386eea4bc89f7b04a6f73b5efe9ad186dd64d9eb95348067f4cd30"}
Feb 18 14:45:19 crc kubenswrapper[4957]: I0218 14:45:19.117257 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" event={"ID":"6e82b47f-b61b-40dd-92f1-62180459082f","Type":"ContainerStarted","Data":"095dc854c1b984b46d0292eab10bdb6976b1d0def50d9cfda49ecf19f20a5a0b"}
Feb 18 14:45:19 crc kubenswrapper[4957]: I0218 14:45:19.119544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f0549571-def1-4cd5-9cae-77780cf6870b","Type":"ContainerStarted","Data":"de93e79a174e99b9e08a917f79f3c34f5419b3bb77b22b150fe0dd91013eb11f"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.191706 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"c7e42da2-0160-4c19-bd98-1ebb4d0d84dc","Type":"ContainerStarted","Data":"ed1096b93402e5d1f11fe0d06dbcd356464bc553039f642e62588e6cd30fb358"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.192636 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.193608 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" event={"ID":"ae719427-398b-455b-8d4f-d1f96df0e800","Type":"ContainerStarted","Data":"cbec57062c75138db30c49d7397548fe6013dbe29c1c57807d46faf1eab4bce1"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.195071 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" event={"ID":"d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7","Type":"ContainerStarted","Data":"9c6bfb59f6a0e5d74943bea44e7daaf22695874be57b6eb6bb02d6676e333a9f"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.195429 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb"
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.196689 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8ebf8bd1-097b-45c7-be49-c38760e885e2","Type":"ContainerStarted","Data":"cbde8ee9a7c4c213e3239ca55f3b4d64597d3725a504a49136a692ddcfa93fca"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.196876 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.198718 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" event={"ID":"c093fd9d-72e8-42d1-a5ad-5e687f61aa9e","Type":"ContainerStarted","Data":"5fb516b3fbb05577fb144825ec3667a42229cf093198e7c55cd7a10d8bf58970"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.198866 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27"
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.200617 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" event={"ID":"6e82b47f-b61b-40dd-92f1-62180459082f","Type":"ContainerStarted","Data":"0e43da80845000d4ced9159f4f38c89453ca3e872d28571e9c9ce93c6fe56421"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.202317 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f0549571-def1-4cd5-9cae-77780cf6870b","Type":"ContainerStarted","Data":"1f8deb4ec16f0d6a588faa5517c9064d5327a43857213b42eb221941ed394780"}
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.202474 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.203815 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" event={"ID":"4de1a2b8-9bfb-4104-b065-e0c991cb95ea","Type":"ContainerStarted","Data":"7ffb8a2accd77be70fbe8f8780a7a5ce28ad3ac4a3d71939176cd87bd810e243"}
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.222862 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=4.215719137 podStartE2EDuration="10.222839522s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:18.857527408 +0000 UTC m=+825.378392162" lastFinishedPulling="2026-02-18 14:45:24.864647783 +0000 UTC m=+831.385512547" observedRunningTime="2026-02-18 14:45:26.220122295 +0000 UTC m=+832.740987039" watchObservedRunningTime="2026-02-18 14:45:26.222839522 +0000 UTC m=+832.743704266" Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.253714 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" podStartSLOduration=3.419302674 podStartE2EDuration="10.253661687s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:18.080577609 +0000 UTC m=+824.601442343" lastFinishedPulling="2026-02-18 14:45:24.914936612 +0000 UTC m=+831.435801356" observedRunningTime="2026-02-18 14:45:26.245978649 +0000 UTC m=+832.766843403" watchObservedRunningTime="2026-02-18 14:45:26.253661687 +0000 UTC m=+832.774526431" Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.274833 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=4.2509005349999995 podStartE2EDuration="10.274804788s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:18.999033067 +0000 UTC m=+825.519897811" lastFinishedPulling="2026-02-18 14:45:25.02293732 +0000 UTC m=+831.543802064" observedRunningTime="2026-02-18 14:45:26.270479105 +0000 UTC m=+832.791343859" watchObservedRunningTime="2026-02-18 14:45:26.274804788 +0000 UTC m=+832.795669532" Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.295681 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" podStartSLOduration=3.068878402 podStartE2EDuration="10.2956567s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:17.602867411 +0000 UTC m=+824.123732145" lastFinishedPulling="2026-02-18 14:45:24.829645699 +0000 UTC m=+831.350510443" observedRunningTime="2026-02-18 14:45:26.288405594 +0000 UTC m=+832.809270328" watchObservedRunningTime="2026-02-18 14:45:26.2956567 +0000 UTC m=+832.816521454" Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.310614 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=4.224109404 podStartE2EDuration="10.310590244s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:18.880563202 +0000 UTC m=+825.401427946" lastFinishedPulling="2026-02-18 14:45:24.967044042 +0000 UTC m=+831.487908786" observedRunningTime="2026-02-18 14:45:26.308458154 +0000 UTC m=+832.829322908" watchObservedRunningTime="2026-02-18 14:45:26.310590244 +0000 UTC m=+832.831455008" Feb 18 14:45:26 crc kubenswrapper[4957]: I0218 14:45:26.334880 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" podStartSLOduration=3.042755679 podStartE2EDuration="10.334852253s" podCreationTimestamp="2026-02-18 14:45:16 
+0000 UTC" firstStartedPulling="2026-02-18 14:45:17.682448311 +0000 UTC m=+824.203313055" lastFinishedPulling="2026-02-18 14:45:24.974544885 +0000 UTC m=+831.495409629" observedRunningTime="2026-02-18 14:45:26.331385075 +0000 UTC m=+832.852249819" watchObservedRunningTime="2026-02-18 14:45:26.334852253 +0000 UTC m=+832.855716997" Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.225159 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" event={"ID":"ae719427-398b-455b-8d4f-d1f96df0e800","Type":"ContainerStarted","Data":"62d7ac0cbaaab594f32af1cb457b0e42d9bd499d38267db39788d5d8a6cf75a1"} Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.225669 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.230863 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" event={"ID":"6e82b47f-b61b-40dd-92f1-62180459082f","Type":"ContainerStarted","Data":"8afb741090b637d619b981d98026edf00ab3a355caa08aa3c21d262f8ee0ef99"} Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.231157 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.233600 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.239200 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.251320 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podStartSLOduration=3.123253484 podStartE2EDuration="12.251292977s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:18.412031344 +0000 UTC m=+824.932896088" lastFinishedPulling="2026-02-18 14:45:27.540070837 +0000 UTC m=+834.060935581" observedRunningTime="2026-02-18 14:45:28.248535549 +0000 UTC m=+834.769400303" watchObservedRunningTime="2026-02-18 14:45:28.251292977 +0000 UTC m=+834.772157721" Feb 18 14:45:28 crc kubenswrapper[4957]: I0218 14:45:28.305972 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podStartSLOduration=3.458924739 podStartE2EDuration="12.305934929s" podCreationTimestamp="2026-02-18 14:45:16 +0000 UTC" firstStartedPulling="2026-02-18 14:45:18.697897394 +0000 UTC m=+825.218762138" lastFinishedPulling="2026-02-18 14:45:27.544907584 +0000 UTC m=+834.065772328" observedRunningTime="2026-02-18 14:45:28.293848346 +0000 UTC m=+834.814713100" watchObservedRunningTime="2026-02-18 14:45:28.305934929 +0000 UTC m=+834.826799673" Feb 18 14:45:29 crc kubenswrapper[4957]: I0218 14:45:29.241367 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" Feb 18 14:45:29 crc kubenswrapper[4957]: I0218 14:45:29.241927 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" Feb 18 14:45:29 crc kubenswrapper[4957]: I0218 14:45:29.249472 4957 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" Feb 18 14:45:29 crc kubenswrapper[4957]: I0218 14:45:29.261002 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" Feb 18 14:45:47 crc kubenswrapper[4957]: I0218 14:45:47.075781 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 14:45:47 crc kubenswrapper[4957]: I0218 14:45:47.150894 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 14:45:47 crc kubenswrapper[4957]: I0218 14:45:47.213943 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 14:45:48 crc kubenswrapper[4957]: I0218 14:45:48.314786 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:45:48 crc kubenswrapper[4957]: I0218 14:45:48.412485 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 14:45:48 crc kubenswrapper[4957]: I0218 14:45:48.412666 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:45:48 crc kubenswrapper[4957]: I0218 14:45:48.535526 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:45:53 crc kubenswrapper[4957]: I0218 14:45:53.081962 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:45:53 crc kubenswrapper[4957]: I0218 14:45:53.082785 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 14:45:53 crc kubenswrapper[4957]: I0218 14:45:53.081969 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:45:53 crc kubenswrapper[4957]: I0218 14:45:53.082913 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 14:45:58 crc kubenswrapper[4957]: I0218 14:45:58.390534 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 14:45:58 crc kubenswrapper[4957]: I0218 14:45:58.392843 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:46:07 crc kubenswrapper[4957]: I0218 14:46:07.279724 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:46:07 crc kubenswrapper[4957]: I0218 14:46:07.280500 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:08 crc kubenswrapper[4957]: I0218 14:46:08.388855 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 14:46:08 crc kubenswrapper[4957]: I0218 14:46:08.388942 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:46:18 crc kubenswrapper[4957]: I0218 14:46:18.388991 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.370658 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s6mvx"] Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.373672 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.401594 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-zxprd"] Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.403552 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.407048 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.408802 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409626 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-datadir\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config-openshift-service-cacrt\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409719 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-sa-token\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409743 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-tmp\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409762 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-entrypoint\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409892 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.409964 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410072 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-catalog-content\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 
14:46:36.410113 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-utilities\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410143 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410177 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-token\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410222 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410283 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdgrc\" (UniqueName: \"kubernetes.io/projected/8860152c-b1e3-45ea-9b74-a67b60258866-kube-api-access-jdgrc\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410364 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zmskq" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410507 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmcmg\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-kube-api-access-vmcmg\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.410537 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-trusted-ca\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.411066 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.411994 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6mvx"] Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.424754 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.427282 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-zxprd"] Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511706 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jdgrc\" (UniqueName: \"kubernetes.io/projected/8860152c-b1e3-45ea-9b74-a67b60258866-kube-api-access-jdgrc\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511761 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmcmg\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-kube-api-access-vmcmg\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511785 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-trusted-ca\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511816 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-datadir\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config-openshift-service-cacrt\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511856 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-sa-token\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511874 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-tmp\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-entrypoint\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511913 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511935 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 
18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511977 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-catalog-content\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.511996 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-utilities\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.512019 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-token\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.512040 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:36 crc kubenswrapper[4957]: E0218 14:46:36.512171 4957 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 18 14:46:36 crc kubenswrapper[4957]: E0218 14:46:36.512228 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver podName:5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3 nodeName:}" failed. No retries permitted until 2026-02-18 14:46:37.012207543 +0000 UTC m=+903.533072287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver") pod "collector-zxprd" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3") : secret "collector-syslog-receiver" not found
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.513497 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-trusted-ca\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.513541 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-datadir\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.513939 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config-openshift-service-cacrt\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: E0218 14:46:36.514706 4957 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found
Feb 18 14:46:36 crc kubenswrapper[4957]: E0218 14:46:36.514814 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics podName:5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3 nodeName:}" failed. No retries permitted until 2026-02-18 14:46:37.014787277 +0000 UTC m=+903.535652021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics") pod "collector-zxprd" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3") : secret "collector-metrics" not found
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.515596 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-entrypoint\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.515704 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-catalog-content\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.516002 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-utilities\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.516222 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.525372 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-token\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.541293 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmcmg\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-kube-api-access-vmcmg\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.545292 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdgrc\" (UniqueName: \"kubernetes.io/projected/8860152c-b1e3-45ea-9b74-a67b60258866-kube-api-access-jdgrc\") pod \"redhat-marketplace-s6mvx\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " pod="openshift-marketplace/redhat-marketplace-s6mvx"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.555360 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-tmp\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.561943 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-sa-token\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.632704 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-zxprd"]
Feb 18 14:46:36 crc kubenswrapper[4957]: E0218 14:46:36.641217 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver metrics], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-zxprd" podUID="5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.696699 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6mvx"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.869412 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zxprd"
Feb 18 14:46:36 crc kubenswrapper[4957]: I0218 14:46:36.905865 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zxprd"
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026154 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026269 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-sa-token\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026288 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-trusted-ca\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026318 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-tmp\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026382 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmcmg\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-kube-api-access-vmcmg\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026555 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-entrypoint\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026595 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-datadir\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026690 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config-openshift-service-cacrt\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.026786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-token\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") "
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.027098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.027186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd"
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.031083 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.031384 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.031937 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.031976 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-datadir" (OuterVolumeSpecName: "datadir") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "datadir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.034978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.037792 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics\") pod \"collector-zxprd\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " pod="openshift-logging/collector-zxprd" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.038261 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config" (OuterVolumeSpecName: "config") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.038728 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-kube-api-access-vmcmg" (OuterVolumeSpecName: "kube-api-access-vmcmg") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "kube-api-access-vmcmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.041559 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-tmp" (OuterVolumeSpecName: "tmp") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.043863 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-token" (OuterVolumeSpecName: "collector-token") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.046665 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-sa-token" (OuterVolumeSpecName: "sa-token") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.095070 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6mvx"] Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128753 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128788 4957 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128800 4957 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128811 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmcmg\" (UniqueName: \"kubernetes.io/projected/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-kube-api-access-vmcmg\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128820 4957 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128829 4957 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-datadir\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128841 4957 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128853 4957 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.128866 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.230132 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.230340 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver\") pod \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\" (UID: \"5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3\") " Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.238747 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver" 
(OuterVolumeSpecName: "collector-syslog-receiver") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.238886 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics" (OuterVolumeSpecName: "metrics") pod "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" (UID: "5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.278921 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.278966 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.334612 4957 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.334932 4957 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.878563 4957 generic.go:334] "Generic (PLEG): container finished" podID="8860152c-b1e3-45ea-9b74-a67b60258866" containerID="246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009" exitCode=0 Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.878651 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-zxprd" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.878707 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerDied","Data":"246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009"} Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.878761 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerStarted","Data":"32822188d1c484fc2d5b593f980316fc2edb2c6cd0239df34995ee99580e2826"} Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.956232 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-zxprd"] Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.973639 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-zxprd"] Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.982559 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5tn9r"] Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.983886 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5tn9r" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.987408 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zmskq" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.988739 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.989177 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.996659 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5tn9r"] Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.997211 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 14:46:37 crc kubenswrapper[4957]: I0218 14:46:37.997564 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.014972 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-entrypoint\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6f2\" (UniqueName: \"kubernetes.io/projected/f43654c7-84ed-488e-912c-c089b778adc7-kube-api-access-bs6f2\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049158 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: 
\"kubernetes.io/projected/f43654c7-84ed-488e-912c-c089b778adc7-sa-token\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049192 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f43654c7-84ed-488e-912c-c089b778adc7-datadir\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049227 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-collector-syslog-receiver\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049257 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-trusted-ca\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049281 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-collector-token\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049314 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f43654c7-84ed-488e-912c-c089b778adc7-tmp\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-metrics\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.049552 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-config-openshift-service-cacrt\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.050039 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-config\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.151904 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f43654c7-84ed-488e-912c-c089b778adc7-tmp\") pod \"collector-5tn9r\" (UID: 
\"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.151956 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-metrics\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.152027 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-config-openshift-service-cacrt\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.152043 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-config\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153189 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-config\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153092 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-config-openshift-service-cacrt\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153285 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-entrypoint\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153323 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6f2\" (UniqueName: \"kubernetes.io/projected/f43654c7-84ed-488e-912c-c089b778adc7-kube-api-access-bs6f2\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153345 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f43654c7-84ed-488e-912c-c089b778adc7-sa-token\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153360 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f43654c7-84ed-488e-912c-c089b778adc7-datadir\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153395 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-collector-syslog-receiver\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153448 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-trusted-ca\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153481 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-collector-token\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.153887 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f43654c7-84ed-488e-912c-c089b778adc7-datadir\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.154723 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-trusted-ca\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.155563 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f43654c7-84ed-488e-912c-c089b778adc7-entrypoint\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.159272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-metrics\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.159372 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-collector-syslog-receiver\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.166547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f43654c7-84ed-488e-912c-c089b778adc7-tmp\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.169995 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f43654c7-84ed-488e-912c-c089b778adc7-collector-token\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.170652 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6f2\" (UniqueName: \"kubernetes.io/projected/f43654c7-84ed-488e-912c-c089b778adc7-kube-api-access-bs6f2\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.172952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f43654c7-84ed-488e-912c-c089b778adc7-sa-token\") pod \"collector-5tn9r\" (UID: \"f43654c7-84ed-488e-912c-c089b778adc7\") " pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.221665 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3" path="/var/lib/kubelet/pods/5c7d61cb-09d1-4ec3-b24b-5bed11ca98c3/volumes" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.323060 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5tn9r" Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.890787 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5tn9r"] Feb 18 14:46:38 crc kubenswrapper[4957]: I0218 14:46:38.896439 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerStarted","Data":"e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5"} Feb 18 14:46:38 crc kubenswrapper[4957]: W0218 14:46:38.904777 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf43654c7_84ed_488e_912c_c089b778adc7.slice/crio-8ca35fcf5703223bc2cc5e0c9c87c743a0e716ce09675d404f6e6bd8ef139db8 WatchSource:0}: Error finding container 8ca35fcf5703223bc2cc5e0c9c87c743a0e716ce09675d404f6e6bd8ef139db8: Status 404 returned error can't find the container with id 8ca35fcf5703223bc2cc5e0c9c87c743a0e716ce09675d404f6e6bd8ef139db8 Feb 18 14:46:39 crc kubenswrapper[4957]: I0218 14:46:39.904077 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5tn9r" event={"ID":"f43654c7-84ed-488e-912c-c089b778adc7","Type":"ContainerStarted","Data":"8ca35fcf5703223bc2cc5e0c9c87c743a0e716ce09675d404f6e6bd8ef139db8"} Feb 18 14:46:39 crc kubenswrapper[4957]: I0218 14:46:39.907761 4957 generic.go:334] "Generic (PLEG): container finished" podID="8860152c-b1e3-45ea-9b74-a67b60258866" containerID="e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5" exitCode=0 Feb 18 14:46:39 crc kubenswrapper[4957]: I0218 14:46:39.907800 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerDied","Data":"e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5"} Feb 18 14:46:40 crc kubenswrapper[4957]: I0218 14:46:40.921502 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerStarted","Data":"5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5"} Feb 18 14:46:40 crc kubenswrapper[4957]: I0218 14:46:40.959130 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s6mvx" podStartSLOduration=2.5578580090000003 
podStartE2EDuration="4.959101731s" podCreationTimestamp="2026-02-18 14:46:36 +0000 UTC" firstStartedPulling="2026-02-18 14:46:37.880967328 +0000 UTC m=+904.401832072" lastFinishedPulling="2026-02-18 14:46:40.28221105 +0000 UTC m=+906.803075794" observedRunningTime="2026-02-18 14:46:40.954856478 +0000 UTC m=+907.475721222" watchObservedRunningTime="2026-02-18 14:46:40.959101731 +0000 UTC m=+907.479966475" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.126296 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cx8cn"] Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.130441 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.143987 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cx8cn"] Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.168434 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-utilities\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.168557 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-catalog-content\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.168634 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhvq\" (UniqueName: \"kubernetes.io/projected/4332fa89-063c-4fe4-9847-17830ac8ee2e-kube-api-access-jfhvq\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.270259 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhvq\" (UniqueName: \"kubernetes.io/projected/4332fa89-063c-4fe4-9847-17830ac8ee2e-kube-api-access-jfhvq\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.270743 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-utilities\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.270882 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-catalog-content\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.271343 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-utilities\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.271554 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-catalog-content\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.298226 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfhvq\" (UniqueName: \"kubernetes.io/projected/4332fa89-063c-4fe4-9847-17830ac8ee2e-kube-api-access-jfhvq\") pod \"certified-operators-cx8cn\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:43 crc kubenswrapper[4957]: I0218 14:46:43.472139 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:46 crc kubenswrapper[4957]: I0218 14:46:46.696940 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:46 crc kubenswrapper[4957]: I0218 14:46:46.698171 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:46 crc kubenswrapper[4957]: I0218 14:46:46.762106 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:47 crc kubenswrapper[4957]: I0218 14:46:47.079469 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:47 crc kubenswrapper[4957]: I0218 14:46:47.915931 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6mvx"] Feb 18 14:46:48 crc kubenswrapper[4957]: W0218 14:46:48.697562 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4332fa89_063c_4fe4_9847_17830ac8ee2e.slice/crio-e8aac533884196e64c87834f46cdd6a58c58f2baec61086c55e39238f1661806 WatchSource:0}: Error finding container e8aac533884196e64c87834f46cdd6a58c58f2baec61086c55e39238f1661806: Status 404 returned error can't find the container with id e8aac533884196e64c87834f46cdd6a58c58f2baec61086c55e39238f1661806 Feb 18 14:46:48 crc kubenswrapper[4957]: I0218 14:46:48.699148 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cx8cn"] Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.024441 4957 generic.go:334] "Generic (PLEG): container finished" podID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerID="e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f" exitCode=0 Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.024572 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerDied","Data":"e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f"} Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.024929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerStarted","Data":"e8aac533884196e64c87834f46cdd6a58c58f2baec61086c55e39238f1661806"} Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.027878 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5tn9r" event={"ID":"f43654c7-84ed-488e-912c-c089b778adc7","Type":"ContainerStarted","Data":"5a3c6bd47c3e8dbd4224521bd81bdaddfaf13e1bd4f1e429c977b587ae4afbc4"} Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.028007 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s6mvx" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="registry-server" containerID="cri-o://5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5" gracePeriod=2 Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.087617 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-5tn9r" podStartSLOduration=2.672931616 podStartE2EDuration="12.087583428s" podCreationTimestamp="2026-02-18 14:46:37 +0000 UTC" firstStartedPulling="2026-02-18 14:46:38.917925015 +0000 UTC m=+905.438789759" lastFinishedPulling="2026-02-18 14:46:48.332576817 +0000 UTC m=+914.853441571" observedRunningTime="2026-02-18 14:46:49.077698142 +0000 UTC m=+915.598562926" watchObservedRunningTime="2026-02-18 14:46:49.087583428 +0000 UTC m=+915.608448192" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.549859 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.728749 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-catalog-content\") pod \"8860152c-b1e3-45ea-9b74-a67b60258866\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.729012 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-utilities\") pod \"8860152c-b1e3-45ea-9b74-a67b60258866\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.730110 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-utilities" (OuterVolumeSpecName: "utilities") pod "8860152c-b1e3-45ea-9b74-a67b60258866" (UID: "8860152c-b1e3-45ea-9b74-a67b60258866"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.731563 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdgrc\" (UniqueName: \"kubernetes.io/projected/8860152c-b1e3-45ea-9b74-a67b60258866-kube-api-access-jdgrc\") pod \"8860152c-b1e3-45ea-9b74-a67b60258866\" (UID: \"8860152c-b1e3-45ea-9b74-a67b60258866\") " Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.732328 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.744864 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8860152c-b1e3-45ea-9b74-a67b60258866-kube-api-access-jdgrc" (OuterVolumeSpecName: "kube-api-access-jdgrc") pod "8860152c-b1e3-45ea-9b74-a67b60258866" (UID: "8860152c-b1e3-45ea-9b74-a67b60258866"). InnerVolumeSpecName "kube-api-access-jdgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.766362 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8860152c-b1e3-45ea-9b74-a67b60258866" (UID: "8860152c-b1e3-45ea-9b74-a67b60258866"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.833926 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdgrc\" (UniqueName: \"kubernetes.io/projected/8860152c-b1e3-45ea-9b74-a67b60258866-kube-api-access-jdgrc\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:49 crc kubenswrapper[4957]: I0218 14:46:49.833961 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8860152c-b1e3-45ea-9b74-a67b60258866-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.041101 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerStarted","Data":"513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532"} Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.044048 4957 generic.go:334] "Generic (PLEG): container finished" podID="8860152c-b1e3-45ea-9b74-a67b60258866" containerID="5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5" exitCode=0 Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.044658 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6mvx" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.050872 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerDied","Data":"5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5"} Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.050943 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6mvx" event={"ID":"8860152c-b1e3-45ea-9b74-a67b60258866","Type":"ContainerDied","Data":"32822188d1c484fc2d5b593f980316fc2edb2c6cd0239df34995ee99580e2826"} Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.050973 4957 scope.go:117] "RemoveContainer" containerID="5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.092408 4957 scope.go:117] "RemoveContainer" containerID="e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.102531 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6mvx"] Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.109219 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6mvx"] Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.119717 4957 scope.go:117] "RemoveContainer" containerID="246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.146708 4957 scope.go:117] "RemoveContainer" containerID="5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5" Feb 18 14:46:50 crc kubenswrapper[4957]: E0218 14:46:50.147473 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5\": container with ID starting with 5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5 not found: ID does not exist" containerID="5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.147522 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5"} err="failed to get container status \"5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5\": rpc error: code = NotFound desc = could not find container \"5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5\": container with ID starting with 5d2ade10b043d7ba71311f79a9d603aa748e77fbfc77e3033231e037b813e1b5 not found: ID does not exist" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.147565 4957 scope.go:117] "RemoveContainer" containerID="e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5" Feb 18 14:46:50 crc kubenswrapper[4957]: E0218 14:46:50.147975 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5\": container with ID starting with e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5 not found: ID does not exist" containerID="e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.148006 4957 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5"} err="failed to get container status \"e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5\": rpc error: code = NotFound desc = could not find container \"e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5\": container with ID starting with e8ca3bd24907b08da0fbc50b712e7e8d2e8c0ed59394698b5b3836a6203cc0b5 not found: ID does not exist" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.148025 4957 scope.go:117] "RemoveContainer" containerID="246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009" Feb 18 14:46:50 crc kubenswrapper[4957]: E0218 14:46:50.148498 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009\": container with ID starting with 246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009 not found: ID does not exist" containerID="246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.148571 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009"} err="failed to get container status \"246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009\": rpc error: code = NotFound desc = could not find container \"246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009\": container with ID starting with 246f037dbd75e02e6e49fa4f70bc20c0d64bdc4ede1cacf53280fb8bfeb5d009 not found: ID does not exist" Feb 18 14:46:50 crc kubenswrapper[4957]: I0218 14:46:50.223465 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" path="/var/lib/kubelet/pods/8860152c-b1e3-45ea-9b74-a67b60258866/volumes" Feb 18 14:46:51 crc kubenswrapper[4957]: I0218 14:46:51.051945 4957 generic.go:334] "Generic (PLEG): container finished" podID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerID="513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532" exitCode=0 Feb 18 14:46:51 crc kubenswrapper[4957]: I0218 14:46:51.052008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerDied","Data":"513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532"} Feb 18 14:46:52 crc kubenswrapper[4957]: I0218 14:46:52.062787 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerStarted","Data":"956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a"} Feb 18 14:46:52 crc kubenswrapper[4957]: I0218 14:46:52.085074 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cx8cn" podStartSLOduration=6.62276901 podStartE2EDuration="9.085051007s" podCreationTimestamp="2026-02-18 14:46:43 +0000 UTC" firstStartedPulling="2026-02-18 14:46:49.0274928 +0000 UTC m=+915.548357544" lastFinishedPulling="2026-02-18 14:46:51.489774797 +0000 UTC m=+918.010639541" observedRunningTime="2026-02-18 14:46:52.081924987 +0000 UTC m=+918.602789731" watchObservedRunningTime="2026-02-18 14:46:52.085051007 +0000 UTC m=+918.605915751" Feb 18 
14:46:53 crc kubenswrapper[4957]: I0218 14:46:53.473325 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:53 crc kubenswrapper[4957]: I0218 14:46:53.473958 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:46:53 crc kubenswrapper[4957]: I0218 14:46:53.522692 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:47:03 crc kubenswrapper[4957]: I0218 14:47:03.522945 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:47:03 crc kubenswrapper[4957]: I0218 14:47:03.570839 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cx8cn"] Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.163953 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cx8cn" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="registry-server" containerID="cri-o://956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a" gracePeriod=2 Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.610939 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.655787 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-utilities\") pod \"4332fa89-063c-4fe4-9847-17830ac8ee2e\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.655835 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-catalog-content\") pod \"4332fa89-063c-4fe4-9847-17830ac8ee2e\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.655869 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfhvq\" (UniqueName: \"kubernetes.io/projected/4332fa89-063c-4fe4-9847-17830ac8ee2e-kube-api-access-jfhvq\") pod \"4332fa89-063c-4fe4-9847-17830ac8ee2e\" (UID: \"4332fa89-063c-4fe4-9847-17830ac8ee2e\") " Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.657111 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-utilities" (OuterVolumeSpecName: "utilities") pod "4332fa89-063c-4fe4-9847-17830ac8ee2e" (UID: "4332fa89-063c-4fe4-9847-17830ac8ee2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.667301 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4332fa89-063c-4fe4-9847-17830ac8ee2e-kube-api-access-jfhvq" (OuterVolumeSpecName: "kube-api-access-jfhvq") pod "4332fa89-063c-4fe4-9847-17830ac8ee2e" (UID: "4332fa89-063c-4fe4-9847-17830ac8ee2e"). InnerVolumeSpecName "kube-api-access-jfhvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.717445 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4332fa89-063c-4fe4-9847-17830ac8ee2e" (UID: "4332fa89-063c-4fe4-9847-17830ac8ee2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.757960 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.758038 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4332fa89-063c-4fe4-9847-17830ac8ee2e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:04 crc kubenswrapper[4957]: I0218 14:47:04.758058 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfhvq\" (UniqueName: \"kubernetes.io/projected/4332fa89-063c-4fe4-9847-17830ac8ee2e-kube-api-access-jfhvq\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.175366 4957 generic.go:334] "Generic (PLEG): container finished" podID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerID="956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a" exitCode=0 Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.175432 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerDied","Data":"956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a"} Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.175471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx8cn" event={"ID":"4332fa89-063c-4fe4-9847-17830ac8ee2e","Type":"ContainerDied","Data":"e8aac533884196e64c87834f46cdd6a58c58f2baec61086c55e39238f1661806"} Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.175492 4957 scope.go:117] "RemoveContainer" containerID="956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.175902 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cx8cn" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.196538 4957 scope.go:117] "RemoveContainer" containerID="513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.224528 4957 scope.go:117] "RemoveContainer" containerID="e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.242447 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cx8cn"] Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.249995 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cx8cn"] Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.258643 4957 scope.go:117] "RemoveContainer" containerID="956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a" Feb 18 14:47:05 crc kubenswrapper[4957]: E0218 14:47:05.259293 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a\": container with ID starting with 956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a not found: ID does not exist" containerID="956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.259395 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a"} err="failed to get container status \"956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a\": rpc error: code = NotFound desc = could not find container \"956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a\": container with ID starting with 956a15374bf51cb93721c5ea408f4e3ac6868eb4b9c7916121be86878650e24a not found: ID does not exist" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.259509 4957 scope.go:117] "RemoveContainer" containerID="513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532" Feb 18 14:47:05 crc kubenswrapper[4957]: E0218 14:47:05.259994 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532\": container with ID starting with 513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532 not found: ID does not exist" containerID="513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.260257 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532"} err="failed to get container status \"513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532\": rpc error: code = NotFound desc = could not find container \"513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532\": container with ID starting with 513b25493aea28311e53aa38cf9178e11a70728bcd01534d3a2af5cd4838a532 not found: ID does not exist" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.260401 4957 scope.go:117] "RemoveContainer" containerID="e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f" Feb 18 14:47:05 crc kubenswrapper[4957]: E0218 14:47:05.261087 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f\": container with ID starting with e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f not found: ID does not exist" containerID="e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f" Feb 18 14:47:05 crc kubenswrapper[4957]: I0218 14:47:05.261183 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f"} err="failed to get container status \"e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f\": rpc error: code = NotFound desc = could not find container \"e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f\": container with ID starting with e7a4269908454b807a0d78c15749ea9b48dffd84a4c6931211d0b5ac5d6e2c2f not found: ID does not exist" Feb 18 14:47:06 crc kubenswrapper[4957]: I0218 14:47:06.229041 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" path="/var/lib/kubelet/pods/4332fa89-063c-4fe4-9847-17830ac8ee2e/volumes" Feb 18 14:47:07 crc kubenswrapper[4957]: I0218 14:47:07.279445 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:47:07 crc kubenswrapper[4957]: I0218 14:47:07.281863 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:47:07 crc kubenswrapper[4957]: I0218 14:47:07.282031 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:47:07 crc kubenswrapper[4957]: I0218 14:47:07.283117 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c57c13136d0c689489f9ddc49289b10f813ba077ecbc190959d9174e61a4424f"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:47:07 crc kubenswrapper[4957]: I0218 14:47:07.283294 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://c57c13136d0c689489f9ddc49289b10f813ba077ecbc190959d9174e61a4424f" gracePeriod=600 Feb 18 14:47:08 crc kubenswrapper[4957]: I0218 14:47:08.207805 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="c57c13136d0c689489f9ddc49289b10f813ba077ecbc190959d9174e61a4424f" exitCode=0 Feb 18 14:47:08 crc kubenswrapper[4957]: I0218 14:47:08.208813 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" 
event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"c57c13136d0c689489f9ddc49289b10f813ba077ecbc190959d9174e61a4424f"} Feb 18 14:47:08 crc kubenswrapper[4957]: I0218 14:47:08.208858 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"cfb8c9a50ccd65f948148f9634be317a0e1f6018a21e87a691f9e80583c8ce0b"} Feb 18 14:47:08 crc kubenswrapper[4957]: I0218 14:47:08.208888 4957 scope.go:117] "RemoveContainer" containerID="7941b927f3e2a93f4dd37a4d3392ef224a62719b3a5e2aadf6edc8ae2eb35a9d" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.905794 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww"] Feb 18 14:47:15 crc kubenswrapper[4957]: E0218 14:47:15.906758 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="extract-content" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.906777 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="extract-content" Feb 18 14:47:15 crc kubenswrapper[4957]: E0218 14:47:15.906796 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="registry-server" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.906804 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="registry-server" Feb 18 14:47:15 crc kubenswrapper[4957]: E0218 14:47:15.906821 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="extract-utilities" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.906830 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="extract-utilities" Feb 18 14:47:15 crc kubenswrapper[4957]: E0218 14:47:15.906843 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="registry-server" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.906850 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="registry-server" Feb 18 14:47:15 crc kubenswrapper[4957]: E0218 14:47:15.906868 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="extract-content" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.906878 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="extract-content" Feb 18 14:47:15 crc kubenswrapper[4957]: E0218 14:47:15.906893 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="extract-utilities" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.906899 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="extract-utilities" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.907012 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8860152c-b1e3-45ea-9b74-a67b60258866" containerName="registry-server" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.907051 4957 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4332fa89-063c-4fe4-9847-17830ac8ee2e" containerName="registry-server" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.908015 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.913137 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:47:15 crc kubenswrapper[4957]: I0218 14:47:15.923701 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww"] Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.079317 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjh44\" (UniqueName: \"kubernetes.io/projected/12179b8a-bf4d-428f-a65c-26ac376f2945-kube-api-access-wjh44\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.079429 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.079545 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.180537 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.181059 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjh44\" (UniqueName: \"kubernetes.io/projected/12179b8a-bf4d-428f-a65c-26ac376f2945-kube-api-access-wjh44\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.181107 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.181314 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.181576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.213978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjh44\" (UniqueName: \"kubernetes.io/projected/12179b8a-bf4d-428f-a65c-26ac376f2945-kube-api-access-wjh44\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.235511 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:16 crc kubenswrapper[4957]: I0218 14:47:16.716297 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww"] Feb 18 14:47:17 crc kubenswrapper[4957]: I0218 14:47:17.292554 4957 generic.go:334] "Generic (PLEG): container finished" podID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerID="53d1badebef81b94ef986468fe9cbcded1b7729b4b8658c7e29848f7a1dc4a39" exitCode=0 Feb 18 14:47:17 crc kubenswrapper[4957]: I0218 14:47:17.292627 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" event={"ID":"12179b8a-bf4d-428f-a65c-26ac376f2945","Type":"ContainerDied","Data":"53d1badebef81b94ef986468fe9cbcded1b7729b4b8658c7e29848f7a1dc4a39"} Feb 18 14:47:17 crc kubenswrapper[4957]: I0218 14:47:17.292666 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" event={"ID":"12179b8a-bf4d-428f-a65c-26ac376f2945","Type":"ContainerStarted","Data":"fd60747cd7fc0a10d362453b8a91af539b40548726580561d2a376996ee534cf"} Feb 18 14:47:19 crc kubenswrapper[4957]: I0218 14:47:19.312672 4957 generic.go:334] "Generic (PLEG): container finished" podID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerID="c3ad8a141d3763db2c04074bc3af88ad6247bead2509a22137231129f362fe0b" exitCode=0 Feb 18 14:47:19 crc kubenswrapper[4957]: I0218 14:47:19.312808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" event={"ID":"12179b8a-bf4d-428f-a65c-26ac376f2945","Type":"ContainerDied","Data":"c3ad8a141d3763db2c04074bc3af88ad6247bead2509a22137231129f362fe0b"} Feb 18 14:47:20 crc kubenswrapper[4957]: I0218 
14:47:20.324859 4957 generic.go:334] "Generic (PLEG): container finished" podID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerID="614df64c7c90e6f451638d960f802c1cb599236960ca30e1f4f6c49c9be83870" exitCode=0 Feb 18 14:47:20 crc kubenswrapper[4957]: I0218 14:47:20.324906 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" event={"ID":"12179b8a-bf4d-428f-a65c-26ac376f2945","Type":"ContainerDied","Data":"614df64c7c90e6f451638d960f802c1cb599236960ca30e1f4f6c49c9be83870"} Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.730123 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.899469 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjh44\" (UniqueName: \"kubernetes.io/projected/12179b8a-bf4d-428f-a65c-26ac376f2945-kube-api-access-wjh44\") pod \"12179b8a-bf4d-428f-a65c-26ac376f2945\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.899619 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-bundle\") pod \"12179b8a-bf4d-428f-a65c-26ac376f2945\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.899708 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-util\") pod \"12179b8a-bf4d-428f-a65c-26ac376f2945\" (UID: \"12179b8a-bf4d-428f-a65c-26ac376f2945\") " Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.901514 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-bundle" (OuterVolumeSpecName: "bundle") pod "12179b8a-bf4d-428f-a65c-26ac376f2945" (UID: "12179b8a-bf4d-428f-a65c-26ac376f2945"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.908311 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12179b8a-bf4d-428f-a65c-26ac376f2945-kube-api-access-wjh44" (OuterVolumeSpecName: "kube-api-access-wjh44") pod "12179b8a-bf4d-428f-a65c-26ac376f2945" (UID: "12179b8a-bf4d-428f-a65c-26ac376f2945"). InnerVolumeSpecName "kube-api-access-wjh44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:47:21 crc kubenswrapper[4957]: I0218 14:47:21.914708 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-util" (OuterVolumeSpecName: "util") pod "12179b8a-bf4d-428f-a65c-26ac376f2945" (UID: "12179b8a-bf4d-428f-a65c-26ac376f2945"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:22 crc kubenswrapper[4957]: I0218 14:47:22.001787 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjh44\" (UniqueName: \"kubernetes.io/projected/12179b8a-bf4d-428f-a65c-26ac376f2945-kube-api-access-wjh44\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:22 crc kubenswrapper[4957]: I0218 14:47:22.001850 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:22 crc kubenswrapper[4957]: I0218 14:47:22.001860 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12179b8a-bf4d-428f-a65c-26ac376f2945-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:22 crc kubenswrapper[4957]: I0218 14:47:22.346564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" event={"ID":"12179b8a-bf4d-428f-a65c-26ac376f2945","Type":"ContainerDied","Data":"fd60747cd7fc0a10d362453b8a91af539b40548726580561d2a376996ee534cf"} Feb 18 14:47:22 crc kubenswrapper[4957]: I0218 14:47:22.346612 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd60747cd7fc0a10d362453b8a91af539b40548726580561d2a376996ee534cf" Feb 18 14:47:22 crc kubenswrapper[4957]: I0218 14:47:22.346642 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww" Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.741733 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-kk85q"] Feb 18 14:47:24 crc kubenswrapper[4957]: E0218 14:47:24.743216 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="pull" Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.743236 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="pull" Feb 18 14:47:24 crc kubenswrapper[4957]: E0218 14:47:24.743271 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="util" Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.743278 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="util" Feb 18 14:47:24 crc kubenswrapper[4957]: E0218 14:47:24.743293 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="extract" Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.743301 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="extract" Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.743478 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="12179b8a-bf4d-428f-a65c-26ac376f2945" containerName="extract" Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.744186 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.747968 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.748124 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.748524 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8k8mz"
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.751038 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-kk85q"]
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.851029 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xb2f\" (UniqueName: \"kubernetes.io/projected/25b254d3-9fe8-4024-b499-a813dbd98972-kube-api-access-7xb2f\") pod \"nmstate-operator-694c9596b7-kk85q\" (UID: \"25b254d3-9fe8-4024-b499-a813dbd98972\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q"
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.952591 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xb2f\" (UniqueName: \"kubernetes.io/projected/25b254d3-9fe8-4024-b499-a813dbd98972-kube-api-access-7xb2f\") pod \"nmstate-operator-694c9596b7-kk85q\" (UID: \"25b254d3-9fe8-4024-b499-a813dbd98972\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q"
Feb 18 14:47:24 crc kubenswrapper[4957]: I0218 14:47:24.983697 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xb2f\" (UniqueName: \"kubernetes.io/projected/25b254d3-9fe8-4024-b499-a813dbd98972-kube-api-access-7xb2f\") pod \"nmstate-operator-694c9596b7-kk85q\" (UID: \"25b254d3-9fe8-4024-b499-a813dbd98972\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q"
Feb 18 14:47:25 crc kubenswrapper[4957]: I0218 14:47:25.064507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q"
Feb 18 14:47:25 crc kubenswrapper[4957]: I0218 14:47:25.618099 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-kk85q"]
Feb 18 14:47:25 crc kubenswrapper[4957]: W0218 14:47:25.624486 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25b254d3_9fe8_4024_b499_a813dbd98972.slice/crio-1b592bdda845cd00c62b0e5cc2f0b45b099ba37bdb4fbe98e791fff58729c6d0 WatchSource:0}: Error finding container 1b592bdda845cd00c62b0e5cc2f0b45b099ba37bdb4fbe98e791fff58729c6d0: Status 404 returned error can't find the container with id 1b592bdda845cd00c62b0e5cc2f0b45b099ba37bdb4fbe98e791fff58729c6d0
Feb 18 14:47:26 crc kubenswrapper[4957]: I0218 14:47:26.376392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q" event={"ID":"25b254d3-9fe8-4024-b499-a813dbd98972","Type":"ContainerStarted","Data":"1b592bdda845cd00c62b0e5cc2f0b45b099ba37bdb4fbe98e791fff58729c6d0"}
Feb 18 14:47:29 crc kubenswrapper[4957]: I0218 14:47:29.400775 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q" event={"ID":"25b254d3-9fe8-4024-b499-a813dbd98972","Type":"ContainerStarted","Data":"240b9b7690050891201f31bd9b89f2a142e89b3a81dd87f79208a28c42e5a297"}
Feb 18 14:47:29 crc kubenswrapper[4957]: I0218 14:47:29.417929 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-kk85q" podStartSLOduration=2.501531239 podStartE2EDuration="5.417911313s" podCreationTimestamp="2026-02-18 14:47:24 +0000 UTC" firstStartedPulling="2026-02-18 14:47:25.627067514 +0000 UTC m=+952.147932258" lastFinishedPulling="2026-02-18 14:47:28.543447588 +0000 UTC m=+955.064312332" observedRunningTime="2026-02-18 14:47:29.417613935 +0000 UTC m=+955.938478679" watchObservedRunningTime="2026-02-18 14:47:29.417911313 +0000 UTC m=+955.938776057"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.336344 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.337848 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.340820 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5fjvn"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.359374 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.360781 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.369207 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.387368 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.425260 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.458019 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.458098 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8mbj\" (UniqueName: \"kubernetes.io/projected/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-kube-api-access-f8mbj\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.458197 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d22h\" (UniqueName: \"kubernetes.io/projected/20ae69bb-d285-4fbd-8c24-8385e5f6152d-kube-api-access-8d22h\") pod \"nmstate-metrics-58c85c668d-6hxmz\" (UID: \"20ae69bb-d285-4fbd-8c24-8385e5f6152d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.483119 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vwl5q"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.496810 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.561033 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8mbj\" (UniqueName: \"kubernetes.io/projected/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-kube-api-access-f8mbj\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.561145 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d22h\" (UniqueName: \"kubernetes.io/projected/20ae69bb-d285-4fbd-8c24-8385e5f6152d-kube-api-access-8d22h\") pod \"nmstate-metrics-58c85c668d-6hxmz\" (UID: \"20ae69bb-d285-4fbd-8c24-8385e5f6152d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.561271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:30 crc kubenswrapper[4957]: E0218 14:47:30.561972 4957 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 18 14:47:30 crc kubenswrapper[4957]: E0218 14:47:30.562048 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-tls-key-pair podName:7bf5dd6b-3bc3-4ead-8fab-478e02b32496 nodeName:}" failed. No retries permitted until 2026-02-18 14:47:31.06202223 +0000 UTC m=+957.582886974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-tls-key-pair") pod "nmstate-webhook-866bcb46dc-vzkmx" (UID: "7bf5dd6b-3bc3-4ead-8fab-478e02b32496") : secret "openshift-nmstate-webhook" not found
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.608575 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d22h\" (UniqueName: \"kubernetes.io/projected/20ae69bb-d285-4fbd-8c24-8385e5f6152d-kube-api-access-8d22h\") pod \"nmstate-metrics-58c85c668d-6hxmz\" (UID: \"20ae69bb-d285-4fbd-8c24-8385e5f6152d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.611217 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8mbj\" (UniqueName: \"kubernetes.io/projected/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-kube-api-access-f8mbj\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.631967 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.633029 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.642212 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.642651 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rkm45"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.642823 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.663011 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.665040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-dbus-socket\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.665196 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzp4\" (UniqueName: \"kubernetes.io/projected/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-kube-api-access-lxzp4\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.665335 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-nmstate-lock\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.665400 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-ovs-socket\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.673116 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"]
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.766512 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-dbus-socket\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.766566 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/938fd18b-26fb-40b4-ab31-e0e8dff90d82-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"
Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.766879 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrcmb\" (UniqueName: \"kubernetes.io/projected/938fd18b-26fb-40b4-ab31-e0e8dff90d82-kube-api-access-zrcmb\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"
volume \"kube-api-access-zrcmb\" (UniqueName: \"kubernetes.io/projected/938fd18b-26fb-40b4-ab31-e0e8dff90d82-kube-api-access-zrcmb\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.766921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-dbus-socket\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.767109 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxzp4\" (UniqueName: \"kubernetes.io/projected/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-kube-api-access-lxzp4\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.767286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-nmstate-lock\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.767345 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-nmstate-lock\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.767499 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-ovs-socket\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.767535 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/938fd18b-26fb-40b4-ab31-e0e8dff90d82-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.767556 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-ovs-socket\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.799548 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxzp4\" (UniqueName: \"kubernetes.io/projected/3b69da89-d2a6-4e8f-ac79-99e1bb296fcc-kube-api-access-lxzp4\") pod \"nmstate-handler-vwl5q\" (UID: \"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc\") " pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.852258 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.863231 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f4db7666f-9jmvl"] Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.865332 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.869587 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/938fd18b-26fb-40b4-ab31-e0e8dff90d82-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.869684 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/938fd18b-26fb-40b4-ab31-e0e8dff90d82-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.869764 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrcmb\" (UniqueName: \"kubernetes.io/projected/938fd18b-26fb-40b4-ab31-e0e8dff90d82-kube-api-access-zrcmb\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: E0218 14:47:30.870283 4957 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 18 14:47:30 crc kubenswrapper[4957]: E0218 14:47:30.870348 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938fd18b-26fb-40b4-ab31-e0e8dff90d82-plugin-serving-cert podName:938fd18b-26fb-40b4-ab31-e0e8dff90d82 nodeName:}" failed. No retries permitted until 2026-02-18 14:47:31.370322748 +0000 UTC m=+957.891187492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/938fd18b-26fb-40b4-ab31-e0e8dff90d82-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-dd487" (UID: "938fd18b-26fb-40b4-ab31-e0e8dff90d82") : secret "plugin-serving-cert" not found Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.871666 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/938fd18b-26fb-40b4-ab31-e0e8dff90d82-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.898406 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrcmb\" (UniqueName: \"kubernetes.io/projected/938fd18b-26fb-40b4-ab31-e0e8dff90d82-kube-api-access-zrcmb\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.910114 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f4db7666f-9jmvl"] Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-config\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971627 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-service-ca\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971651 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmghv\" (UniqueName: \"kubernetes.io/projected/7f1e72bc-e441-4172-b761-7d580c7a6ebc-kube-api-access-tmghv\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-serving-cert\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971700 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-trusted-ca-bundle\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971727 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-oauth-serving-cert\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:30 crc kubenswrapper[4957]: I0218 14:47:30.971828 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-oauth-config\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073605 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-config\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073653 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-service-ca\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073674 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmghv\" (UniqueName: \"kubernetes.io/projected/7f1e72bc-e441-4172-b761-7d580c7a6ebc-kube-api-access-tmghv\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073696 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-serving-cert\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073716 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-trusted-ca-bundle\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073743 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-oauth-serving-cert\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.073824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-oauth-config\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.074787 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-config\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.074811 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-service-ca\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.075816 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-oauth-serving-cert\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.075912 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-trusted-ca-bundle\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.077638 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-oauth-config\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.079841 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-serving-cert\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.083743 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bf5dd6b-3bc3-4ead-8fab-478e02b32496-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vzkmx\" (UID: \"7bf5dd6b-3bc3-4ead-8fab-478e02b32496\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.085789 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.091687 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmghv\" (UniqueName: \"kubernetes.io/projected/7f1e72bc-e441-4172-b761-7d580c7a6ebc-kube-api-access-tmghv\") pod \"console-f4db7666f-9jmvl\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " pod="openshift-console/console-f4db7666f-9jmvl"
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.206508 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f4db7666f-9jmvl"
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.380776 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/938fd18b-26fb-40b4-ab31-e0e8dff90d82-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.387357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/938fd18b-26fb-40b4-ab31-e0e8dff90d82-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd487\" (UID: \"938fd18b-26fb-40b4-ab31-e0e8dff90d82\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.407123 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz"]
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.442817 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"]
Feb 18 14:47:31 crc kubenswrapper[4957]: W0218 14:47:31.458371 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bf5dd6b_3bc3_4ead_8fab_478e02b32496.slice/crio-d78da8bdea902ade9afc7c4dfd816be3a679396896fece002dba2ab3d911f18b WatchSource:0}: Error finding container d78da8bdea902ade9afc7c4dfd816be3a679396896fece002dba2ab3d911f18b: Status 404 returned error can't find the container with id d78da8bdea902ade9afc7c4dfd816be3a679396896fece002dba2ab3d911f18b
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.460549 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vwl5q" event={"ID":"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc","Type":"ContainerStarted","Data":"3a681020849c2578fe985aedc9151e026d1f5ab8366114f8f99231837ab7d4f8"}
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.465169 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz" event={"ID":"20ae69bb-d285-4fbd-8c24-8385e5f6152d","Type":"ContainerStarted","Data":"59c4e8f40502963a19898a100c4dd2fa7067bc4d72996969a0717e3d92003c9f"}
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.673946 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"
Feb 18 14:47:31 crc kubenswrapper[4957]: I0218 14:47:31.777353 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f4db7666f-9jmvl"]
Feb 18 14:47:32 crc kubenswrapper[4957]: I0218 14:47:32.111595 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487"]
Feb 18 14:47:32 crc kubenswrapper[4957]: W0218 14:47:32.122484 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod938fd18b_26fb_40b4_ab31_e0e8dff90d82.slice/crio-8b89e16a439c529661b07b40a5fdb68c90440cdcf252d0b8706819d31d1fad9c WatchSource:0}: Error finding container 8b89e16a439c529661b07b40a5fdb68c90440cdcf252d0b8706819d31d1fad9c: Status 404 returned error can't find the container with id 8b89e16a439c529661b07b40a5fdb68c90440cdcf252d0b8706819d31d1fad9c
Feb 18 14:47:32 crc kubenswrapper[4957]: I0218 14:47:32.476128 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f4db7666f-9jmvl" event={"ID":"7f1e72bc-e441-4172-b761-7d580c7a6ebc","Type":"ContainerStarted","Data":"d95140379f41a68d3520bb53d130245803292a9350d3a82bd8e2187deda0e519"}
Feb 18 14:47:32 crc kubenswrapper[4957]: I0218 14:47:32.477367 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f4db7666f-9jmvl" event={"ID":"7f1e72bc-e441-4172-b761-7d580c7a6ebc","Type":"ContainerStarted","Data":"96b567d8437bf02dcd88506f739fda65f8d04c03e11424d93324e16fa9742ed3"}
Feb 18 14:47:32 crc kubenswrapper[4957]: I0218 14:47:32.478822 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" event={"ID":"7bf5dd6b-3bc3-4ead-8fab-478e02b32496","Type":"ContainerStarted","Data":"d78da8bdea902ade9afc7c4dfd816be3a679396896fece002dba2ab3d911f18b"}
Feb 18 14:47:32 crc kubenswrapper[4957]: I0218 14:47:32.480882 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" event={"ID":"938fd18b-26fb-40b4-ab31-e0e8dff90d82","Type":"ContainerStarted","Data":"8b89e16a439c529661b07b40a5fdb68c90440cdcf252d0b8706819d31d1fad9c"}
Feb 18 14:47:32 crc kubenswrapper[4957]: I0218 14:47:32.509481 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f4db7666f-9jmvl" podStartSLOduration=2.509461035 podStartE2EDuration="2.509461035s" podCreationTimestamp="2026-02-18 14:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:47:32.507298492 +0000 UTC m=+959.028163246" watchObservedRunningTime="2026-02-18 14:47:32.509461035 +0000 UTC m=+959.030325779"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.297344 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nt2x5"]
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.300449 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.321533 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nt2x5"]
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.364170 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5jmb\" (UniqueName: \"kubernetes.io/projected/f066a024-a0f2-411e-b059-e97047ad4c4e-kube-api-access-v5jmb\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.364226 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-utilities\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.364329 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-catalog-content\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.466537 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5jmb\" (UniqueName: \"kubernetes.io/projected/f066a024-a0f2-411e-b059-e97047ad4c4e-kube-api-access-v5jmb\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.466626 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-utilities\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.466760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-catalog-content\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.467515 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-catalog-content\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.467616 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-utilities\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.495015 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5jmb\" (UniqueName: \"kubernetes.io/projected/f066a024-a0f2-411e-b059-e97047ad4c4e-kube-api-access-v5jmb\") pod \"community-operators-nt2x5\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.524066 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vwl5q" event={"ID":"3b69da89-d2a6-4e8f-ac79-99e1bb296fcc","Type":"ContainerStarted","Data":"7f199e254598df3b800da2d3dc27e13edd76a1c174c1ed94349ccda345a49883"}
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.524228 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.528074 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz" event={"ID":"20ae69bb-d285-4fbd-8c24-8385e5f6152d","Type":"ContainerStarted","Data":"e5b6bb52acf10b09c227cc3c7d6bc50b157b1fc9b921343bc570c90628a60681"}
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.532958 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" event={"ID":"7bf5dd6b-3bc3-4ead-8fab-478e02b32496","Type":"ContainerStarted","Data":"ab09bd12fde653f0f74669a7ce5523f23dc3497834b13a2465df10ee92d82d08"}
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.533378 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.552699 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vwl5q" podStartSLOduration=1.407466537 podStartE2EDuration="5.552656867s" podCreationTimestamp="2026-02-18 14:47:30 +0000 UTC" firstStartedPulling="2026-02-18 14:47:30.912713284 +0000 UTC m=+957.433578028" lastFinishedPulling="2026-02-18 14:47:35.057903614 +0000 UTC m=+961.578768358" observedRunningTime="2026-02-18 14:47:35.544841981 +0000 UTC m=+962.065706735" watchObservedRunningTime="2026-02-18 14:47:35.552656867 +0000 UTC m=+962.073521611"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.576475 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" podStartSLOduration=1.966514088 podStartE2EDuration="5.576445625s" podCreationTimestamp="2026-02-18 14:47:30 +0000 UTC" firstStartedPulling="2026-02-18 14:47:31.46291428 +0000 UTC m=+957.983779024" lastFinishedPulling="2026-02-18 14:47:35.072845817 +0000 UTC m=+961.593710561" observedRunningTime="2026-02-18 14:47:35.567350882 +0000 UTC m=+962.088215626" watchObservedRunningTime="2026-02-18 14:47:35.576445625 +0000 UTC m=+962.097310369"
Feb 18 14:47:35 crc kubenswrapper[4957]: I0218 14:47:35.617376 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:36 crc kubenswrapper[4957]: I0218 14:47:36.275021 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nt2x5"]
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.565625 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" event={"ID":"938fd18b-26fb-40b4-ab31-e0e8dff90d82","Type":"ContainerStarted","Data":"6a2109babc24c8534781dba724f4596d07d3a96f3b854b276ebc3ff6a2e1ea13"}
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.570616 4957 generic.go:334] "Generic (PLEG): container finished" podID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerID="0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f" exitCode=0
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.570725 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerDied","Data":"0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f"}
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.570769 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerStarted","Data":"9a048c31b7a77b8886c8396952b68eb480acf95d1a17ff2d2ef6874ed95786c7"}
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.577528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz" event={"ID":"20ae69bb-d285-4fbd-8c24-8385e5f6152d","Type":"ContainerStarted","Data":"753d606628d44e079a0a79c0ed94d25c16e6e0399358f8c62cd57bd5a0c9d0ae"}
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.616869 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd487" podStartSLOduration=2.645630814 podStartE2EDuration="8.616836186s" podCreationTimestamp="2026-02-18 14:47:30 +0000 UTC" firstStartedPulling="2026-02-18 14:47:32.125967581 +0000 UTC m=+958.646832325" lastFinishedPulling="2026-02-18 14:47:38.097172953 +0000 UTC m=+964.618037697" observedRunningTime="2026-02-18 14:47:38.593166821 +0000 UTC m=+965.114031585" watchObservedRunningTime="2026-02-18 14:47:38.616836186 +0000 UTC m=+965.137700940"
Feb 18 14:47:38 crc kubenswrapper[4957]: I0218 14:47:38.672256 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-6hxmz" podStartSLOduration=1.981925465 podStartE2EDuration="8.672218728s" podCreationTimestamp="2026-02-18 14:47:30 +0000 UTC" firstStartedPulling="2026-02-18 14:47:31.422721178 +0000 UTC m=+957.943585922" lastFinishedPulling="2026-02-18 14:47:38.113014441 +0000 UTC m=+964.633879185" observedRunningTime="2026-02-18 14:47:38.655612858 +0000 UTC m=+965.176477612" watchObservedRunningTime="2026-02-18 14:47:38.672218728 +0000 UTC m=+965.193083472"
Feb 18 14:47:39 crc kubenswrapper[4957]: I0218 14:47:39.588497 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerStarted","Data":"7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220"}
Feb 18 14:47:40 crc kubenswrapper[4957]: I0218 14:47:40.599071 4957 generic.go:334] "Generic (PLEG): container finished" podID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerID="7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220" exitCode=0
Feb 18 14:47:40 crc kubenswrapper[4957]: I0218 14:47:40.599129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerDied","Data":"7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220"}
Feb 18 14:47:40 crc kubenswrapper[4957]: I0218 14:47:40.888440 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vwl5q"
Feb 18 14:47:41 crc kubenswrapper[4957]: I0218 14:47:41.207851 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f4db7666f-9jmvl"
Feb 18 14:47:41 crc kubenswrapper[4957]: I0218 14:47:41.207903 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f4db7666f-9jmvl"
Feb 18 14:47:41 crc kubenswrapper[4957]: I0218 14:47:41.217079 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f4db7666f-9jmvl"
Feb 18 14:47:41 crc kubenswrapper[4957]: I0218 14:47:41.614900 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f4db7666f-9jmvl"
Feb 18 14:47:41 crc kubenswrapper[4957]: I0218 14:47:41.686456 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79f7c68f86-ldbhx"]
Feb 18 14:47:42 crc kubenswrapper[4957]: I0218 14:47:42.638484 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerStarted","Data":"7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405"}
Feb 18 14:47:42 crc kubenswrapper[4957]: I0218 14:47:42.678175 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nt2x5" podStartSLOduration=4.871021638 podStartE2EDuration="7.67814958s" podCreationTimestamp="2026-02-18 14:47:35 +0000 UTC" firstStartedPulling="2026-02-18 14:47:38.573027569 +0000 UTC m=+965.093892313" lastFinishedPulling="2026-02-18 14:47:41.380155501 +0000 UTC m=+967.901020255" observedRunningTime="2026-02-18 14:47:42.665864044 +0000 UTC m=+969.186728808" watchObservedRunningTime="2026-02-18 14:47:42.67814958 +0000 UTC m=+969.199014324"
Feb 18 14:47:45 crc kubenswrapper[4957]: I0218 14:47:45.618957 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:45 crc kubenswrapper[4957]: I0218 14:47:45.619358 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:45 crc kubenswrapper[4957]: I0218 14:47:45.676376 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:51 crc kubenswrapper[4957]: I0218 14:47:51.094668 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx"
Feb 18 14:47:55 crc kubenswrapper[4957]: I0218 14:47:55.676189 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nt2x5"
Feb 18 14:47:55 crc kubenswrapper[4957]: I0218 14:47:55.739566 4957 kubelet.go:2437] "SyncLoop DELETE"
source="api" pods=["openshift-marketplace/community-operators-nt2x5"] Feb 18 14:47:55 crc kubenswrapper[4957]: I0218 14:47:55.743226 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nt2x5" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="registry-server" containerID="cri-o://7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405" gracePeriod=2 Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.184056 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nt2x5" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.316063 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5jmb\" (UniqueName: \"kubernetes.io/projected/f066a024-a0f2-411e-b059-e97047ad4c4e-kube-api-access-v5jmb\") pod \"f066a024-a0f2-411e-b059-e97047ad4c4e\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.316113 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-catalog-content\") pod \"f066a024-a0f2-411e-b059-e97047ad4c4e\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.316172 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-utilities\") pod \"f066a024-a0f2-411e-b059-e97047ad4c4e\" (UID: \"f066a024-a0f2-411e-b059-e97047ad4c4e\") " Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.317538 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-utilities" (OuterVolumeSpecName: "utilities") pod "f066a024-a0f2-411e-b059-e97047ad4c4e" (UID: "f066a024-a0f2-411e-b059-e97047ad4c4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.322889 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f066a024-a0f2-411e-b059-e97047ad4c4e-kube-api-access-v5jmb" (OuterVolumeSpecName: "kube-api-access-v5jmb") pod "f066a024-a0f2-411e-b059-e97047ad4c4e" (UID: "f066a024-a0f2-411e-b059-e97047ad4c4e"). InnerVolumeSpecName "kube-api-access-v5jmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.371256 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f066a024-a0f2-411e-b059-e97047ad4c4e" (UID: "f066a024-a0f2-411e-b059-e97047ad4c4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.417869 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5jmb\" (UniqueName: \"kubernetes.io/projected/f066a024-a0f2-411e-b059-e97047ad4c4e-kube-api-access-v5jmb\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.417899 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.417909 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f066a024-a0f2-411e-b059-e97047ad4c4e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.754068 4957 generic.go:334] "Generic (PLEG): container finished" podID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerID="7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405" exitCode=0 Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.754136 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nt2x5" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.754146 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerDied","Data":"7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405"} Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.754471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nt2x5" event={"ID":"f066a024-a0f2-411e-b059-e97047ad4c4e","Type":"ContainerDied","Data":"9a048c31b7a77b8886c8396952b68eb480acf95d1a17ff2d2ef6874ed95786c7"} Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.754493 4957 scope.go:117] "RemoveContainer" containerID="7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.779220 4957 scope.go:117] "RemoveContainer" containerID="7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.807243 4957 scope.go:117] "RemoveContainer" containerID="0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.807398 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nt2x5"] Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.815744 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nt2x5"] Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.842660 4957 scope.go:117] "RemoveContainer" containerID="7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405" Feb 18 14:47:56 crc kubenswrapper[4957]: E0218 14:47:56.843323 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405\": container with ID starting with 7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405 not found: ID does not exist" containerID="7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.843375 
4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405"} err="failed to get container status \"7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405\": rpc error: code = NotFound desc = could not find container \"7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405\": container with ID starting with 7ed3f9483c7cb31b6dc37979260ec0b750c98d90c9ab8b8afb610b34bc9a8405 not found: ID does not exist" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.843408 4957 scope.go:117] "RemoveContainer" containerID="7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220" Feb 18 14:47:56 crc kubenswrapper[4957]: E0218 14:47:56.843907 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220\": container with ID starting with 7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220 not found: ID does not exist" containerID="7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.843952 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220"} err="failed to get container status \"7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220\": rpc error: code = NotFound desc = could not find container \"7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220\": container with ID starting with 7a5cad5460849cc3bcda48bd627c9ad2b1a43fe3db57cc298023438ad3248220 not found: ID does not exist" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.843980 4957 scope.go:117] "RemoveContainer" containerID="0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f" Feb 18 14:47:56 crc kubenswrapper[4957]: E0218 14:47:56.846557 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f\": container with ID starting with 0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f not found: ID does not exist" containerID="0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f" Feb 18 14:47:56 crc kubenswrapper[4957]: I0218 14:47:56.846594 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f"} err="failed to get container status \"0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f\": rpc error: code = NotFound desc = could not find container \"0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f\": container with ID starting with 0515c2859bf2362c9ede1d3fd90a8f0446ca07a0a9fd135aa82aa49bdabaca8f not found: ID does not exist" Feb 18 14:47:58 crc kubenswrapper[4957]: I0218 14:47:58.224444 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" path="/var/lib/kubelet/pods/f066a024-a0f2-411e-b059-e97047ad4c4e/volumes" Feb 18 14:48:06 crc kubenswrapper[4957]: I0218 14:48:06.746738 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-79f7c68f86-ldbhx" podUID="55b19801-5e2c-47d1-b460-de7f465941b4" containerName="console" 
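The E-level "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs above are a benign race: the three containers were already removed on the first RemoveContainer pass, so the follow-up status lookups fail with gRPC NotFound. Cleanup code typically classifies that error and moves on; a sketch of the usual check (assumed pattern, not the kubelet's exact code):

package sketch

import (
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// alreadyRemoved reports whether err is the benign "container not found"
// case seen in the DeleteContainer entries above: the gRPC status code is
// NotFound, meaning there is nothing left to delete.
func alreadyRemoved(err error) bool {
    return status.Code(err) == codes.NotFound
}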
containerID="cri-o://8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf" gracePeriod=15 Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.221026 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79f7c68f86-ldbhx_55b19801-5e2c-47d1-b460-de7f465941b4/console/0.log" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.221957 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.293315 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-oauth-serving-cert\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.294199 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-service-ca\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.294291 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-console-config\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.294368 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-oauth-config\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.294487 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-trusted-ca-bundle\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.294512 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-serving-cert\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.294575 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5clh\" (UniqueName: \"kubernetes.io/projected/55b19801-5e2c-47d1-b460-de7f465941b4-kube-api-access-l5clh\") pod \"55b19801-5e2c-47d1-b460-de7f465941b4\" (UID: \"55b19801-5e2c-47d1-b460-de7f465941b4\") " Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.295368 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.296439 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-service-ca" (OuterVolumeSpecName: "service-ca") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.296836 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-console-config" (OuterVolumeSpecName: "console-config") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.299461 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.305657 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.305848 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.306635 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b19801-5e2c-47d1-b460-de7f465941b4-kube-api-access-l5clh" (OuterVolumeSpecName: "kube-api-access-l5clh") pod "55b19801-5e2c-47d1-b460-de7f465941b4" (UID: "55b19801-5e2c-47d1-b460-de7f465941b4"). InnerVolumeSpecName "kube-api-access-l5clh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396575 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396608 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396618 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396627 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55b19801-5e2c-47d1-b460-de7f465941b4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396636 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5clh\" (UniqueName: \"kubernetes.io/projected/55b19801-5e2c-47d1-b460-de7f465941b4-kube-api-access-l5clh\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396647 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.396658 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55b19801-5e2c-47d1-b460-de7f465941b4-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.873826 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79f7c68f86-ldbhx_55b19801-5e2c-47d1-b460-de7f465941b4/console/0.log" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.875304 4957 generic.go:334] "Generic (PLEG): container finished" podID="55b19801-5e2c-47d1-b460-de7f465941b4" containerID="8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf" exitCode=2 Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.875477 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-ldbhx" event={"ID":"55b19801-5e2c-47d1-b460-de7f465941b4","Type":"ContainerDied","Data":"8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf"} Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.875601 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-ldbhx" event={"ID":"55b19801-5e2c-47d1-b460-de7f465941b4","Type":"ContainerDied","Data":"bee339cd435cbca41fc1f74962e8debd0cc8856ac3ac1fffd34d02ef5681aad2"} Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.875401 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79f7c68f86-ldbhx" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.875636 4957 scope.go:117] "RemoveContainer" containerID="8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.923703 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79f7c68f86-ldbhx"] Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.926147 4957 scope.go:117] "RemoveContainer" containerID="8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf" Feb 18 14:48:07 crc kubenswrapper[4957]: E0218 14:48:07.927193 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf\": container with ID starting with 8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf not found: ID does not exist" containerID="8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.927250 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf"} err="failed to get container status \"8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf\": rpc error: code = NotFound desc = could not find container \"8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf\": container with ID starting with 8fd25ff164793defec7640c055a00c655b4e1fb41475345575c073ef511ab8cf not found: ID does not exist" Feb 18 14:48:07 crc kubenswrapper[4957]: I0218 14:48:07.930805 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-79f7c68f86-ldbhx"] Feb 18 14:48:08 crc kubenswrapper[4957]: I0218 14:48:08.228028 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b19801-5e2c-47d1-b460-de7f465941b4" path="/var/lib/kubelet/pods/55b19801-5e2c-47d1-b460-de7f465941b4/volumes" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.527979 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m"] Feb 18 14:48:11 crc kubenswrapper[4957]: E0218 14:48:11.529011 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="extract-utilities" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.529038 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="extract-utilities" Feb 18 14:48:11 crc kubenswrapper[4957]: E0218 14:48:11.529069 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="registry-server" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.529079 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="registry-server" Feb 18 14:48:11 crc kubenswrapper[4957]: E0218 14:48:11.529103 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b19801-5e2c-47d1-b460-de7f465941b4" containerName="console" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.529115 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b19801-5e2c-47d1-b460-de7f465941b4" containerName="console" Feb 18 14:48:11 crc kubenswrapper[4957]: E0218 14:48:11.529135 4957 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="extract-content" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.529143 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="extract-content" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.529377 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f066a024-a0f2-411e-b059-e97047ad4c4e" containerName="registry-server" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.529394 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b19801-5e2c-47d1-b460-de7f465941b4" containerName="console" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.531070 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.540268 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.540917 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m"] Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.587891 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.588017 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsht5\" (UniqueName: \"kubernetes.io/projected/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-kube-api-access-xsht5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.588070 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.689191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.689282 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: 
\"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.689354 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsht5\" (UniqueName: \"kubernetes.io/projected/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-kube-api-access-xsht5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.690324 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.690596 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.717135 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsht5\" (UniqueName: \"kubernetes.io/projected/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-kube-api-access-xsht5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:11 crc kubenswrapper[4957]: I0218 14:48:11.882210 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:12 crc kubenswrapper[4957]: I0218 14:48:12.368724 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m"] Feb 18 14:48:12 crc kubenswrapper[4957]: I0218 14:48:12.922016 4957 generic.go:334] "Generic (PLEG): container finished" podID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerID="7ebca9b6da00a4358576e59e1a173093801b0af376ec133679c554c87c54f0cd" exitCode=0 Feb 18 14:48:12 crc kubenswrapper[4957]: I0218 14:48:12.922100 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" event={"ID":"e69e6354-b8aa-4eb0-80c2-85121b9ce3de","Type":"ContainerDied","Data":"7ebca9b6da00a4358576e59e1a173093801b0af376ec133679c554c87c54f0cd"} Feb 18 14:48:12 crc kubenswrapper[4957]: I0218 14:48:12.922439 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" event={"ID":"e69e6354-b8aa-4eb0-80c2-85121b9ce3de","Type":"ContainerStarted","Data":"35878d120fa9ab77a05f7e000313c33df0a08f94c202f85c49cd158ecc7ce298"} Feb 18 14:48:12 crc kubenswrapper[4957]: I0218 14:48:12.924058 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:48:14 crc kubenswrapper[4957]: I0218 14:48:14.942709 4957 generic.go:334] "Generic (PLEG): container finished" podID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerID="f66c809935ff400e4fa6b55c13e6266a990517c6666a65ce40ea7cdb1944f2e9" exitCode=0 Feb 18 14:48:14 crc kubenswrapper[4957]: I0218 14:48:14.942815 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" event={"ID":"e69e6354-b8aa-4eb0-80c2-85121b9ce3de","Type":"ContainerDied","Data":"f66c809935ff400e4fa6b55c13e6266a990517c6666a65ce40ea7cdb1944f2e9"} Feb 18 14:48:15 crc kubenswrapper[4957]: I0218 14:48:15.957973 4957 generic.go:334] "Generic (PLEG): container finished" podID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerID="02afea515c9bb1005a57db2fdf0fabdfe6d21d3a2cbfa9f2292fb66a86009473" exitCode=0 Feb 18 14:48:15 crc kubenswrapper[4957]: I0218 14:48:15.958027 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" event={"ID":"e69e6354-b8aa-4eb0-80c2-85121b9ce3de","Type":"ContainerDied","Data":"02afea515c9bb1005a57db2fdf0fabdfe6d21d3a2cbfa9f2292fb66a86009473"} Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.297461 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.414824 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsht5\" (UniqueName: \"kubernetes.io/projected/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-kube-api-access-xsht5\") pod \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.415008 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-util\") pod \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.415060 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-bundle\") pod \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\" (UID: \"e69e6354-b8aa-4eb0-80c2-85121b9ce3de\") " Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.416455 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-bundle" (OuterVolumeSpecName: "bundle") pod "e69e6354-b8aa-4eb0-80c2-85121b9ce3de" (UID: "e69e6354-b8aa-4eb0-80c2-85121b9ce3de"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.426871 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-kube-api-access-xsht5" (OuterVolumeSpecName: "kube-api-access-xsht5") pod "e69e6354-b8aa-4eb0-80c2-85121b9ce3de" (UID: "e69e6354-b8aa-4eb0-80c2-85121b9ce3de"). InnerVolumeSpecName "kube-api-access-xsht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.433077 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-util" (OuterVolumeSpecName: "util") pod "e69e6354-b8aa-4eb0-80c2-85121b9ce3de" (UID: "e69e6354-b8aa-4eb0-80c2-85121b9ce3de"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.517251 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsht5\" (UniqueName: \"kubernetes.io/projected/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-kube-api-access-xsht5\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.517298 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.517319 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69e6354-b8aa-4eb0-80c2-85121b9ce3de-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.978590 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" event={"ID":"e69e6354-b8aa-4eb0-80c2-85121b9ce3de","Type":"ContainerDied","Data":"35878d120fa9ab77a05f7e000313c33df0a08f94c202f85c49cd158ecc7ce298"} Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.978642 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35878d120fa9ab77a05f7e000313c33df0a08f94c202f85c49cd158ecc7ce298" Feb 18 14:48:17 crc kubenswrapper[4957]: I0218 14:48:17.978697 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.619190 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj"] Feb 18 14:48:26 crc kubenswrapper[4957]: E0218 14:48:26.620770 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="extract" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.620789 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="extract" Feb 18 14:48:26 crc kubenswrapper[4957]: E0218 14:48:26.620808 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="util" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.620817 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="util" Feb 18 14:48:26 crc kubenswrapper[4957]: E0218 14:48:26.620837 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="pull" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.620847 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="pull" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.621055 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69e6354-b8aa-4eb0-80c2-85121b9ce3de" containerName="extract" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.622067 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.635314 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.635853 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.635893 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.636051 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.644117 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wlp67" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.649803 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj"] Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.712511 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f287d67-8d26-430a-a775-fdf0abeed6dd-apiservice-cert\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.712877 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25l2v\" (UniqueName: \"kubernetes.io/projected/4f287d67-8d26-430a-a775-fdf0abeed6dd-kube-api-access-25l2v\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.713085 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f287d67-8d26-430a-a775-fdf0abeed6dd-webhook-cert\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.815748 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25l2v\" (UniqueName: \"kubernetes.io/projected/4f287d67-8d26-430a-a775-fdf0abeed6dd-kube-api-access-25l2v\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.815877 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f287d67-8d26-430a-a775-fdf0abeed6dd-webhook-cert\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.815988 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f287d67-8d26-430a-a775-fdf0abeed6dd-apiservice-cert\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.824629 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f287d67-8d26-430a-a775-fdf0abeed6dd-apiservice-cert\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.825605 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f287d67-8d26-430a-a775-fdf0abeed6dd-webhook-cert\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.885563 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25l2v\" (UniqueName: \"kubernetes.io/projected/4f287d67-8d26-430a-a775-fdf0abeed6dd-kube-api-access-25l2v\") pod \"metallb-operator-controller-manager-8675cb849f-2g7hj\" (UID: \"4f287d67-8d26-430a-a775-fdf0abeed6dd\") " pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:26 crc kubenswrapper[4957]: I0218 14:48:26.943778 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.065202 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf"] Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.066169 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.077462 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.077739 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.079781 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jjlk6" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.121881 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf"] Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.231768 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32734ff2-fe7b-4588-a4c8-0e5882b54b87-webhook-cert\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.231905 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32734ff2-fe7b-4588-a4c8-0e5882b54b87-apiservice-cert\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.231971 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrqj\" (UniqueName: \"kubernetes.io/projected/32734ff2-fe7b-4588-a4c8-0e5882b54b87-kube-api-access-dlrqj\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.337503 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32734ff2-fe7b-4588-a4c8-0e5882b54b87-apiservice-cert\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.338166 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlrqj\" (UniqueName: \"kubernetes.io/projected/32734ff2-fe7b-4588-a4c8-0e5882b54b87-kube-api-access-dlrqj\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.338272 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32734ff2-fe7b-4588-a4c8-0e5882b54b87-webhook-cert\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 
14:48:27.356312 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32734ff2-fe7b-4588-a4c8-0e5882b54b87-apiservice-cert\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.356824 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32734ff2-fe7b-4588-a4c8-0e5882b54b87-webhook-cert\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.362783 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlrqj\" (UniqueName: \"kubernetes.io/projected/32734ff2-fe7b-4588-a4c8-0e5882b54b87-kube-api-access-dlrqj\") pod \"metallb-operator-webhook-server-84df667ccc-2w5tf\" (UID: \"32734ff2-fe7b-4588-a4c8-0e5882b54b87\") " pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.430389 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.557218 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj"] Feb 18 14:48:27 crc kubenswrapper[4957]: W0218 14:48:27.571217 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f287d67_8d26_430a_a775_fdf0abeed6dd.slice/crio-98e05875faa2e53ea6479e63486b9c7a8f481fd2aa6fd9ccce881729fa958971 WatchSource:0}: Error finding container 98e05875faa2e53ea6479e63486b9c7a8f481fd2aa6fd9ccce881729fa958971: Status 404 returned error can't find the container with id 98e05875faa2e53ea6479e63486b9c7a8f481fd2aa6fd9ccce881729fa958971 Feb 18 14:48:27 crc kubenswrapper[4957]: I0218 14:48:27.898263 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf"] Feb 18 14:48:28 crc kubenswrapper[4957]: I0218 14:48:28.076031 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" event={"ID":"32734ff2-fe7b-4588-a4c8-0e5882b54b87","Type":"ContainerStarted","Data":"ef69b54b2442e3f7858dc210c90160532fd515aa3a61a9c3ec4c4b936a6a79de"} Feb 18 14:48:28 crc kubenswrapper[4957]: I0218 14:48:28.077941 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" event={"ID":"4f287d67-8d26-430a-a775-fdf0abeed6dd","Type":"ContainerStarted","Data":"98e05875faa2e53ea6479e63486b9c7a8f481fd2aa6fd9ccce881729fa958971"} Feb 18 14:48:32 crc kubenswrapper[4957]: I0218 14:48:32.130224 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" event={"ID":"4f287d67-8d26-430a-a775-fdf0abeed6dd","Type":"ContainerStarted","Data":"afa6f369785be620d98fa0eec8cfde9c15198ff7482cb86d1e5977af114d991d"} Feb 18 14:48:32 crc kubenswrapper[4957]: I0218 14:48:32.131138 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:48:32 crc kubenswrapper[4957]: I0218 14:48:32.162953 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" podStartSLOduration=2.692000514 podStartE2EDuration="6.162922847s" podCreationTimestamp="2026-02-18 14:48:26 +0000 UTC" firstStartedPulling="2026-02-18 14:48:27.588639468 +0000 UTC m=+1014.109504202" lastFinishedPulling="2026-02-18 14:48:31.059561791 +0000 UTC m=+1017.580426535" observedRunningTime="2026-02-18 14:48:32.158977072 +0000 UTC m=+1018.679841816" watchObservedRunningTime="2026-02-18 14:48:32.162922847 +0000 UTC m=+1018.683787591" Feb 18 14:48:34 crc kubenswrapper[4957]: I0218 14:48:34.153096 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" event={"ID":"32734ff2-fe7b-4588-a4c8-0e5882b54b87","Type":"ContainerStarted","Data":"e4afb054492727097059a2d4d40ca47d547635f3083ddb7d324534e16cc97534"} Feb 18 14:48:34 crc kubenswrapper[4957]: I0218 14:48:34.154022 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:48:34 crc kubenswrapper[4957]: I0218 14:48:34.182923 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podStartSLOduration=1.778011268 podStartE2EDuration="7.182899486s" podCreationTimestamp="2026-02-18 14:48:27 +0000 UTC" firstStartedPulling="2026-02-18 14:48:27.907558034 +0000 UTC m=+1014.428422778" lastFinishedPulling="2026-02-18 14:48:33.312446252 +0000 UTC m=+1019.833310996" observedRunningTime="2026-02-18 14:48:34.175988966 +0000 UTC m=+1020.696853730" watchObservedRunningTime="2026-02-18 14:48:34.182899486 +0000 UTC m=+1020.703764230" Feb 18 14:48:47 crc kubenswrapper[4957]: I0218 14:48:47.448634 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 14:49:06 crc kubenswrapper[4957]: I0218 14:49:06.949562 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.279167 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.279256 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.665742 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx"] Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.667486 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.670099 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-74dhh" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.670319 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.678783 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-c8kqz"] Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.682518 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.684763 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.685020 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.689241 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx"] Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.742270 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/84258a40-276a-4da4-8240-603932be25c0-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bl7jx\" (UID: \"84258a40-276a-4da4-8240-603932be25c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.742331 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dt4\" (UniqueName: \"kubernetes.io/projected/84258a40-276a-4da4-8240-603932be25c0-kube-api-access-68dt4\") pod \"frr-k8s-webhook-server-78b44bf5bb-bl7jx\" (UID: \"84258a40-276a-4da4-8240-603932be25c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.790189 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wvv4k"] Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.792137 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wvv4k" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.798487 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.798544 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.798687 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.799967 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-jp752" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.822020 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-hw6sv"] Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.825683 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.827916 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844491 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-metrics-certs\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844554 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-metrics\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844620 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/84258a40-276a-4da4-8240-603932be25c0-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bl7jx\" (UID: \"84258a40-276a-4da4-8240-603932be25c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844671 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-sockets\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844698 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68dt4\" (UniqueName: \"kubernetes.io/projected/84258a40-276a-4da4-8240-603932be25c0-kube-api-access-68dt4\") pod \"frr-k8s-webhook-server-78b44bf5bb-bl7jx\" (UID: \"84258a40-276a-4da4-8240-603932be25c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844743 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfl5s\" (UniqueName: \"kubernetes.io/projected/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-kube-api-access-vfl5s\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844770 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-conf\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844802 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-reloader\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.844833 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-startup\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.855759 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/84258a40-276a-4da4-8240-603932be25c0-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bl7jx\" (UID: \"84258a40-276a-4da4-8240-603932be25c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.861848 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-hw6sv"] Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.873536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68dt4\" (UniqueName: \"kubernetes.io/projected/84258a40-276a-4da4-8240-603932be25c0-kube-api-access-68dt4\") pod \"frr-k8s-webhook-server-78b44bf5bb-bl7jx\" (UID: \"84258a40-276a-4da4-8240-603932be25c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947155 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3929daaa-39b8-475f-9af0-644180cb7682-cert\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947212 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-reloader\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947262 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-startup\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947293 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcblg\" (UniqueName: \"kubernetes.io/projected/379fdde6-815b-433b-b62c-b9863ea4fb9e-kube-api-access-hcblg\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947323 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/379fdde6-815b-433b-b62c-b9863ea4fb9e-metallb-excludel2\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947350 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-metrics-certs\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947374 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947409 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-metrics-certs\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947462 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-metrics\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947484 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3929daaa-39b8-475f-9af0-644180cb7682-metrics-certs\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947528 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-sockets\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947572 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77g2c\" (UniqueName: \"kubernetes.io/projected/3929daaa-39b8-475f-9af0-644180cb7682-kube-api-access-77g2c\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfl5s\" (UniqueName: \"kubernetes.io/projected/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-kube-api-access-vfl5s\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-conf\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.947900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-reloader\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.948140 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-conf\") pod \"frr-k8s-c8kqz\" (UID: 
\"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.948487 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-sockets\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.948655 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-metrics\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.948867 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-frr-startup\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.953887 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-metrics-certs\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.968199 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfl5s\" (UniqueName: \"kubernetes.io/projected/06e3f0bd-70d3-493b-ab24-e8f75298f7a3-kube-api-access-vfl5s\") pod \"frr-k8s-c8kqz\" (UID: \"06e3f0bd-70d3-493b-ab24-e8f75298f7a3\") " pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:07 crc kubenswrapper[4957]: I0218 14:49:07.993575 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.011978 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.049980 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3929daaa-39b8-475f-9af0-644180cb7682-metrics-certs\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.050100 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77g2c\" (UniqueName: \"kubernetes.io/projected/3929daaa-39b8-475f-9af0-644180cb7682-kube-api-access-77g2c\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.050157 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3929daaa-39b8-475f-9af0-644180cb7682-cert\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.050208 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcblg\" (UniqueName: \"kubernetes.io/projected/379fdde6-815b-433b-b62c-b9863ea4fb9e-kube-api-access-hcblg\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.050232 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/379fdde6-815b-433b-b62c-b9863ea4fb9e-metallb-excludel2\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.050275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-metrics-certs\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.050297 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: E0218 14:49:08.050497 4957 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 14:49:08 crc kubenswrapper[4957]: E0218 14:49:08.050568 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist podName:379fdde6-815b-433b-b62c-b9863ea4fb9e nodeName:}" failed. No retries permitted until 2026-02-18 14:49:08.550543387 +0000 UTC m=+1055.071408121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist") pod "speaker-wvv4k" (UID: "379fdde6-815b-433b-b62c-b9863ea4fb9e") : secret "metallb-memberlist" not found Feb 18 14:49:08 crc kubenswrapper[4957]: E0218 14:49:08.050602 4957 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 18 14:49:08 crc kubenswrapper[4957]: E0218 14:49:08.050715 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-metrics-certs podName:379fdde6-815b-433b-b62c-b9863ea4fb9e nodeName:}" failed. No retries permitted until 2026-02-18 14:49:08.550681211 +0000 UTC m=+1055.071546035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-metrics-certs") pod "speaker-wvv4k" (UID: "379fdde6-815b-433b-b62c-b9863ea4fb9e") : secret "speaker-certs-secret" not found Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.051728 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/379fdde6-815b-433b-b62c-b9863ea4fb9e-metallb-excludel2\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.053312 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.060437 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3929daaa-39b8-475f-9af0-644180cb7682-metrics-certs\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.072517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77g2c\" (UniqueName: \"kubernetes.io/projected/3929daaa-39b8-475f-9af0-644180cb7682-kube-api-access-77g2c\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.064567 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3929daaa-39b8-475f-9af0-644180cb7682-cert\") pod \"controller-69bbfbf88f-hw6sv\" (UID: \"3929daaa-39b8-475f-9af0-644180cb7682\") " pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.087893 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcblg\" (UniqueName: \"kubernetes.io/projected/379fdde6-815b-433b-b62c-b9863ea4fb9e-kube-api-access-hcblg\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.149723 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.478175 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"224b461f3db399b264066e4299c673eae8e8ea6ceca6c3378de57f47133fa186"} Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.543886 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx"] Feb 18 14:49:08 crc kubenswrapper[4957]: W0218 14:49:08.548330 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84258a40_276a_4da4_8240_603932be25c0.slice/crio-e652b82d45f3b64c2e1d62a4ea71219b9390487b5c0218347b1e5d7a6bc0baae WatchSource:0}: Error finding container e652b82d45f3b64c2e1d62a4ea71219b9390487b5c0218347b1e5d7a6bc0baae: Status 404 returned error can't find the container with id e652b82d45f3b64c2e1d62a4ea71219b9390487b5c0218347b1e5d7a6bc0baae Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.561875 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-hw6sv"] Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.563548 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-metrics-certs\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.563590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:08 crc kubenswrapper[4957]: E0218 14:49:08.563813 4957 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 14:49:08 crc kubenswrapper[4957]: E0218 14:49:08.563873 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist podName:379fdde6-815b-433b-b62c-b9863ea4fb9e nodeName:}" failed. No retries permitted until 2026-02-18 14:49:09.563855766 +0000 UTC m=+1056.084720510 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist") pod "speaker-wvv4k" (UID: "379fdde6-815b-433b-b62c-b9863ea4fb9e") : secret "metallb-memberlist" not found Feb 18 14:49:08 crc kubenswrapper[4957]: I0218 14:49:08.569786 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-metrics-certs\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.489775 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hw6sv" event={"ID":"3929daaa-39b8-475f-9af0-644180cb7682","Type":"ContainerStarted","Data":"a55f97a928f9eb22dcf05e5416fb9336c8ea1d4e042fdd4945fccbb1f616193f"} Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.490127 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hw6sv" event={"ID":"3929daaa-39b8-475f-9af0-644180cb7682","Type":"ContainerStarted","Data":"a1d4ad4f006df753a43b4aedb6e48444c211669d711d0b365685c94cc06d2194"} Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.490139 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hw6sv" event={"ID":"3929daaa-39b8-475f-9af0-644180cb7682","Type":"ContainerStarted","Data":"dd68d22b041f2d651a3d7726d54de78df14adb9adaf80e84b9437563c1cef8f7"} Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.490163 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.491795 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" event={"ID":"84258a40-276a-4da4-8240-603932be25c0","Type":"ContainerStarted","Data":"e652b82d45f3b64c2e1d62a4ea71219b9390487b5c0218347b1e5d7a6bc0baae"} Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.531317 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-hw6sv" podStartSLOduration=2.531298486 podStartE2EDuration="2.531298486s" podCreationTimestamp="2026-02-18 14:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:49:09.527973559 +0000 UTC m=+1056.048838303" watchObservedRunningTime="2026-02-18 14:49:09.531298486 +0000 UTC m=+1056.052163230" Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.583482 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.591213 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379fdde6-815b-433b-b62c-b9863ea4fb9e-memberlist\") pod \"speaker-wvv4k\" (UID: \"379fdde6-815b-433b-b62c-b9863ea4fb9e\") " pod="metallb-system/speaker-wvv4k" Feb 18 14:49:09 crc kubenswrapper[4957]: I0218 14:49:09.613639 4957 util.go:30] "No sandbox for pod can be found. 
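Note the durationBeforeRetry doubling across the two memberlist failures above: 500ms on the first failed mount, 1s on the second. A sketch of that retry schedule, assuming the familiar exponential pattern with an upper bound; only the first two delays are confirmed by the log, so the continuing factor and the cap are assumptions:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry in the log
	const factor = 2                // consistent with the observed 500ms -> 1s step
	const max = 2 * time.Minute     // hypothetical upper bound

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, retry in %v\n", attempt, delay)
		delay *= factor
		if delay > max {
			delay = max
		}
	}
}
```

The backoff resets once a mount succeeds, which is why the memberlist volume mounts cleanly at 14:49:09 after the secret finally appears.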
Need to start a new one" pod="metallb-system/speaker-wvv4k" Feb 18 14:49:09 crc kubenswrapper[4957]: W0218 14:49:09.657744 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379fdde6_815b_433b_b62c_b9863ea4fb9e.slice/crio-364204bf107e8b746d75e335db34ee9f80bf0a703baa7541e161e4f964f7a802 WatchSource:0}: Error finding container 364204bf107e8b746d75e335db34ee9f80bf0a703baa7541e161e4f964f7a802: Status 404 returned error can't find the container with id 364204bf107e8b746d75e335db34ee9f80bf0a703baa7541e161e4f964f7a802 Feb 18 14:49:10 crc kubenswrapper[4957]: I0218 14:49:10.504564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wvv4k" event={"ID":"379fdde6-815b-433b-b62c-b9863ea4fb9e","Type":"ContainerStarted","Data":"d70999a99e4d5c6b2e45b34dc4b9f264a85a4862bd7ccabaf815abada4d8d1d7"} Feb 18 14:49:10 crc kubenswrapper[4957]: I0218 14:49:10.504638 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wvv4k" event={"ID":"379fdde6-815b-433b-b62c-b9863ea4fb9e","Type":"ContainerStarted","Data":"46fa0439b78dc98b2e466e1c77a11ad92a91158f1c9a87d365215a5a6fad8545"} Feb 18 14:49:10 crc kubenswrapper[4957]: I0218 14:49:10.504650 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wvv4k" event={"ID":"379fdde6-815b-433b-b62c-b9863ea4fb9e","Type":"ContainerStarted","Data":"364204bf107e8b746d75e335db34ee9f80bf0a703baa7541e161e4f964f7a802"} Feb 18 14:49:10 crc kubenswrapper[4957]: I0218 14:49:10.505486 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wvv4k" Feb 18 14:49:14 crc kubenswrapper[4957]: I0218 14:49:14.245185 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wvv4k" podStartSLOduration=7.245163155 podStartE2EDuration="7.245163155s" podCreationTimestamp="2026-02-18 14:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:49:10.537367656 +0000 UTC m=+1057.058232410" watchObservedRunningTime="2026-02-18 14:49:14.245163155 +0000 UTC m=+1060.766027899" Feb 18 14:49:18 crc kubenswrapper[4957]: I0218 14:49:18.161680 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 14:49:18 crc kubenswrapper[4957]: I0218 14:49:18.612809 4957 generic.go:334] "Generic (PLEG): container finished" podID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerID="2192982f2249f81a3d6bf28fd857642688c80c73b4f1b2bee58183cc711af91c" exitCode=0 Feb 18 14:49:18 crc kubenswrapper[4957]: I0218 14:49:18.612972 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerDied","Data":"2192982f2249f81a3d6bf28fd857642688c80c73b4f1b2bee58183cc711af91c"} Feb 18 14:49:18 crc kubenswrapper[4957]: I0218 14:49:18.614927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" event={"ID":"84258a40-276a-4da4-8240-603932be25c0","Type":"ContainerStarted","Data":"8ec51358c3b2e5eea4e34871efe9d013d61ddca8462c2c83fe7be24427a95975"} Feb 18 14:49:18 crc kubenswrapper[4957]: I0218 14:49:18.615081 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:18 crc kubenswrapper[4957]: 
I0218 14:49:18.670283 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podStartSLOduration=2.6081485669999998 podStartE2EDuration="11.670252975s" podCreationTimestamp="2026-02-18 14:49:07 +0000 UTC" firstStartedPulling="2026-02-18 14:49:08.562036563 +0000 UTC m=+1055.082901317" lastFinishedPulling="2026-02-18 14:49:17.624140981 +0000 UTC m=+1064.145005725" observedRunningTime="2026-02-18 14:49:18.667264818 +0000 UTC m=+1065.188129562" watchObservedRunningTime="2026-02-18 14:49:18.670252975 +0000 UTC m=+1065.191117719" Feb 18 14:49:19 crc kubenswrapper[4957]: I0218 14:49:19.619433 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wvv4k" Feb 18 14:49:19 crc kubenswrapper[4957]: I0218 14:49:19.633780 4957 generic.go:334] "Generic (PLEG): container finished" podID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerID="f3d924b9efb12f2e215282653996d59940b72207b619bf1c6d7478972d446364" exitCode=0 Feb 18 14:49:19 crc kubenswrapper[4957]: I0218 14:49:19.633872 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerDied","Data":"f3d924b9efb12f2e215282653996d59940b72207b619bf1c6d7478972d446364"} Feb 18 14:49:20 crc kubenswrapper[4957]: I0218 14:49:20.651439 4957 generic.go:334] "Generic (PLEG): container finished" podID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerID="c7b1ac85de6fa2da3a577340f507303930c3734415d6e7e568e798a172f710df" exitCode=0 Feb 18 14:49:20 crc kubenswrapper[4957]: I0218 14:49:20.653130 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerDied","Data":"c7b1ac85de6fa2da3a577340f507303930c3734415d6e7e568e798a172f710df"} Feb 18 14:49:21 crc kubenswrapper[4957]: I0218 14:49:21.668791 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"b62cfcd6c28fcd7ee0d7e5a9d2a6c8386157dd9b624d5fb1a036343aab4de87f"} Feb 18 14:49:21 crc kubenswrapper[4957]: I0218 14:49:21.669098 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"b486c402bb609c6bbb2ebed3c25be0b35be53e99d3f0b6a9ce21baf1bfa41681"} Feb 18 14:49:21 crc kubenswrapper[4957]: I0218 14:49:21.669113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"49b83ce706eea250b91649b2e85af341536f1b1130e5f548bacbc3725a8b4d72"} Feb 18 14:49:21 crc kubenswrapper[4957]: I0218 14:49:21.669127 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"02d58f3725fc3455093ad9d4bc9a6fff935d5e35877a9633675912a93af85d0d"} Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.514245 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5qns2"] Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.517049 4957 util.go:30] "No sandbox for pod can be found. 
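The frr-k8s-c8kqz events above are classic init-container sequencing: three containers run one at a time, each reported as ContainerDied with exitCode=0 before the next starts, and only then do the long-lived containers all come up at 14:49:21. A sketch of that contract; the log identifies containers only by ID, so the names used below are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// runInit stands in for running one init container to completion.
func runInit(name string) int { fmt.Println("init exited 0:", name); return 0 }

func main() {
	// Init containers run strictly in order; a nonzero exit aborts the pod start.
	inits := []string{"cp-frr-files", "cp-reloader", "cp-metrics"} // assumed names
	for _, c := range inits {
		if code := runInit(c); code != 0 {
			fmt.Println("init failed, pod start aborted:", c)
			return
		}
	}
	// Main containers start together once the last init container finishes.
	var wg sync.WaitGroup
	for _, c := range []string{"controller", "frr", "reloader", "frr-metrics"} {
		wg.Add(1)
		go func(name string) { defer wg.Done(); fmt.Println("started:", name) }(c)
	}
	wg.Wait()
}
```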
Need to start a new one" pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.520895 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.521089 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.521280 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wn4xb" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.530294 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5qns2"] Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.639033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcws\" (UniqueName: \"kubernetes.io/projected/08d1410b-2756-4473-af14-8c10334415d4-kube-api-access-pwcws\") pod \"openstack-operator-index-5qns2\" (UID: \"08d1410b-2756-4473-af14-8c10334415d4\") " pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.685541 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"0e53da37c2cd63553dba84429da720366d5a710d047a25b0e06d3280106ec561"} Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.685613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"424866b064da5c634f22e6d7341c3e1cd592a43ad5059cffcae191b05e7042c7"} Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.687372 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.740641 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwcws\" (UniqueName: \"kubernetes.io/projected/08d1410b-2756-4473-af14-8c10334415d4-kube-api-access-pwcws\") pod \"openstack-operator-index-5qns2\" (UID: \"08d1410b-2756-4473-af14-8c10334415d4\") " pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.748809 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-c8kqz" podStartSLOduration=6.368517702 podStartE2EDuration="15.748779865s" podCreationTimestamp="2026-02-18 14:49:07 +0000 UTC" firstStartedPulling="2026-02-18 14:49:08.226669069 +0000 UTC m=+1054.747533813" lastFinishedPulling="2026-02-18 14:49:17.606931232 +0000 UTC m=+1064.127795976" observedRunningTime="2026-02-18 14:49:22.743913004 +0000 UTC m=+1069.264777758" watchObservedRunningTime="2026-02-18 14:49:22.748779865 +0000 UTC m=+1069.269644609" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.781496 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwcws\" (UniqueName: \"kubernetes.io/projected/08d1410b-2756-4473-af14-8c10334415d4-kube-api-access-pwcws\") pod \"openstack-operator-index-5qns2\" (UID: \"08d1410b-2756-4473-af14-8c10334415d4\") " pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:22 crc kubenswrapper[4957]: I0218 14:49:22.844697 4957 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:23 crc kubenswrapper[4957]: I0218 14:49:23.012253 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:23 crc kubenswrapper[4957]: I0218 14:49:23.071843 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:23 crc kubenswrapper[4957]: I0218 14:49:23.341038 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5qns2"] Feb 18 14:49:23 crc kubenswrapper[4957]: I0218 14:49:23.697266 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5qns2" event={"ID":"08d1410b-2756-4473-af14-8c10334415d4","Type":"ContainerStarted","Data":"0db24f823b94ec72f1ce9f0b1ddff8a6b36c98b5a51654d3892091b8bceb7491"} Feb 18 14:49:25 crc kubenswrapper[4957]: I0218 14:49:25.896320 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5qns2"] Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.502249 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hbqf7"] Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.504584 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.515363 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbqf7"] Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.549127 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55p5\" (UniqueName: \"kubernetes.io/projected/4c4be899-e6fc-4664-89e1-b2eb45187e3a-kube-api-access-j55p5\") pod \"openstack-operator-index-hbqf7\" (UID: \"4c4be899-e6fc-4664-89e1-b2eb45187e3a\") " pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.650725 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55p5\" (UniqueName: \"kubernetes.io/projected/4c4be899-e6fc-4664-89e1-b2eb45187e3a-kube-api-access-j55p5\") pod \"openstack-operator-index-hbqf7\" (UID: \"4c4be899-e6fc-4664-89e1-b2eb45187e3a\") " pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.674686 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55p5\" (UniqueName: \"kubernetes.io/projected/4c4be899-e6fc-4664-89e1-b2eb45187e3a-kube-api-access-j55p5\") pod \"openstack-operator-index-hbqf7\" (UID: \"4c4be899-e6fc-4664-89e1-b2eb45187e3a\") " pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:26 crc kubenswrapper[4957]: I0218 14:49:26.873077 4957 util.go:30] "No sandbox for pod can be found. 
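The startup-probe transitions above ("unhealthy", then "started" within the same minute) act as a gate: readiness and liveness results for the container only begin to count once the startup probe has succeeded, which is why frr-k8s-c8kqz does not report readiness "ready" until 14:49:38. A small sketch of that gate; the probe result sequence mirrors the log, the interpretation logic is illustrative:

```go
package main

import "fmt"

func main() {
	results := []bool{false, true, true} // observed: unhealthy, started, then ready
	started := false
	for _, ok := range results {
		switch {
		case !started && !ok:
			fmt.Println("startup probe: unhealthy") // liveness/readiness still suppressed
		case !started && ok:
			started = true
			fmt.Println("startup probe: started; readiness/liveness now active")
		default:
			fmt.Println("readiness probe:", map[bool]string{true: "ready", false: "not ready"}[ok])
		}
	}
}
```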
Need to start a new one" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:27 crc kubenswrapper[4957]: I0218 14:49:27.551896 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbqf7"] Feb 18 14:49:27 crc kubenswrapper[4957]: I0218 14:49:27.750748 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5qns2" event={"ID":"08d1410b-2756-4473-af14-8c10334415d4","Type":"ContainerStarted","Data":"b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824"} Feb 18 14:49:27 crc kubenswrapper[4957]: I0218 14:49:27.750867 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-5qns2" podUID="08d1410b-2756-4473-af14-8c10334415d4" containerName="registry-server" containerID="cri-o://b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824" gracePeriod=2 Feb 18 14:49:27 crc kubenswrapper[4957]: I0218 14:49:27.753957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqf7" event={"ID":"4c4be899-e6fc-4664-89e1-b2eb45187e3a","Type":"ContainerStarted","Data":"f71690ed44bea566a9a4a400411d9e0e001e76b2369a145ea8fe69673abaf90e"} Feb 18 14:49:27 crc kubenswrapper[4957]: I0218 14:49:27.786783 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5qns2" podStartSLOduration=1.9778293869999999 podStartE2EDuration="5.786745352s" podCreationTimestamp="2026-02-18 14:49:22 +0000 UTC" firstStartedPulling="2026-02-18 14:49:23.348657046 +0000 UTC m=+1069.869521790" lastFinishedPulling="2026-02-18 14:49:27.157573011 +0000 UTC m=+1073.678437755" observedRunningTime="2026-02-18 14:49:27.772693824 +0000 UTC m=+1074.293558588" watchObservedRunningTime="2026-02-18 14:49:27.786745352 +0000 UTC m=+1074.307610096" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.043737 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.378738 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.499753 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwcws\" (UniqueName: \"kubernetes.io/projected/08d1410b-2756-4473-af14-8c10334415d4-kube-api-access-pwcws\") pod \"08d1410b-2756-4473-af14-8c10334415d4\" (UID: \"08d1410b-2756-4473-af14-8c10334415d4\") " Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.509222 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d1410b-2756-4473-af14-8c10334415d4-kube-api-access-pwcws" (OuterVolumeSpecName: "kube-api-access-pwcws") pod "08d1410b-2756-4473-af14-8c10334415d4" (UID: "08d1410b-2756-4473-af14-8c10334415d4"). InnerVolumeSpecName "kube-api-access-pwcws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.602166 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwcws\" (UniqueName: \"kubernetes.io/projected/08d1410b-2756-4473-af14-8c10334415d4-kube-api-access-pwcws\") on node \"crc\" DevicePath \"\"" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.765901 4957 generic.go:334] "Generic (PLEG): container finished" podID="08d1410b-2756-4473-af14-8c10334415d4" containerID="b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824" exitCode=0 Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.766007 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5qns2" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.766009 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5qns2" event={"ID":"08d1410b-2756-4473-af14-8c10334415d4","Type":"ContainerDied","Data":"b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824"} Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.766081 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5qns2" event={"ID":"08d1410b-2756-4473-af14-8c10334415d4","Type":"ContainerDied","Data":"0db24f823b94ec72f1ce9f0b1ddff8a6b36c98b5a51654d3892091b8bceb7491"} Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.766110 4957 scope.go:117] "RemoveContainer" containerID="b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.768135 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqf7" event={"ID":"4c4be899-e6fc-4664-89e1-b2eb45187e3a","Type":"ContainerStarted","Data":"810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f"} Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.795309 4957 scope.go:117] "RemoveContainer" containerID="b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824" Feb 18 14:49:28 crc kubenswrapper[4957]: E0218 14:49:28.797578 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824\": container with ID starting with b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824 not found: ID does not exist" containerID="b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.797632 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824"} err="failed to get container status \"b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824\": rpc error: code = NotFound desc = could not find container \"b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824\": container with ID starting with b6f26f3a02a4a2738c709df881cbeeacd957a151453d3959e31ed51946cca824 not found: ID does not exist" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.797956 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hbqf7" podStartSLOduration=2.750875916 podStartE2EDuration="2.797941432s" podCreationTimestamp="2026-02-18 14:49:26 +0000 UTC" firstStartedPulling="2026-02-18 14:49:27.563161692 
+0000 UTC m=+1074.084026436" lastFinishedPulling="2026-02-18 14:49:27.610227208 +0000 UTC m=+1074.131091952" observedRunningTime="2026-02-18 14:49:28.794841682 +0000 UTC m=+1075.315706426" watchObservedRunningTime="2026-02-18 14:49:28.797941432 +0000 UTC m=+1075.318806176" Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.817775 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5qns2"] Feb 18 14:49:28 crc kubenswrapper[4957]: I0218 14:49:28.824761 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5qns2"] Feb 18 14:49:30 crc kubenswrapper[4957]: I0218 14:49:30.229300 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d1410b-2756-4473-af14-8c10334415d4" path="/var/lib/kubelet/pods/08d1410b-2756-4473-af14-8c10334415d4/volumes" Feb 18 14:49:36 crc kubenswrapper[4957]: I0218 14:49:36.873559 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:36 crc kubenswrapper[4957]: I0218 14:49:36.874083 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:36 crc kubenswrapper[4957]: I0218 14:49:36.915481 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:37 crc kubenswrapper[4957]: I0218 14:49:37.279495 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:49:37 crc kubenswrapper[4957]: I0218 14:49:37.279567 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:49:37 crc kubenswrapper[4957]: I0218 14:49:37.895778 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 14:49:38 crc kubenswrapper[4957]: I0218 14:49:38.015847 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c8kqz" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.146399 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h"] Feb 18 14:49:44 crc kubenswrapper[4957]: E0218 14:49:44.147739 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d1410b-2756-4473-af14-8c10334415d4" containerName="registry-server" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.147759 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d1410b-2756-4473-af14-8c10334415d4" containerName="registry-server" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.147976 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d1410b-2756-4473-af14-8c10334415d4" containerName="registry-server" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.149379 4957 util.go:30] "No sandbox for pod can be found. 
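The NotFound exchange during the openstack-operator-index-5qns2 teardown above (RemoveContainer, then "ContainerStatus from runtime service failed ... NotFound", then "DeleteContainer returned error") is the usual idempotent-delete pattern: when cleanup races with the runtime having already pruned the container, the NotFound is logged and treated as done rather than retried, and the pod REMOVE proceeds. A sketch of that pattern; the sentinel error and helper are illustrative, not the CRI API:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found") // stand-in for the CRI NotFound code

// removeContainer pretends the runtime already pruned the container,
// as happened with container b6f26f3a... above.
func removeContainer(id string) error { return errNotFound }

func main() {
	err := removeContainer("b6f26f3a02a4")
	if errors.Is(err, errNotFound) {
		// Already gone: record it and move on, keeping cleanup idempotent.
		fmt.Println("DeleteContainer returned error:", err, "(treated as success)")
		return
	}
	if err != nil {
		fmt.Println("delete failed, will retry:", err)
	}
}
```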
Need to start a new one" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.152187 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gxk22" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.205895 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h"] Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.280340 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdxvj\" (UniqueName: \"kubernetes.io/projected/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-kube-api-access-pdxvj\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.281060 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-util\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.281145 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-bundle\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.382193 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-util\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.382257 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-bundle\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.382431 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdxvj\" (UniqueName: \"kubernetes.io/projected/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-kube-api-access-pdxvj\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.382750 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-util\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.382994 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-bundle\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.419318 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdxvj\" (UniqueName: \"kubernetes.io/projected/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-kube-api-access-pdxvj\") pod \"916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:44 crc kubenswrapper[4957]: I0218 14:49:44.480727 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:45 crc kubenswrapper[4957]: I0218 14:49:45.018913 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h"] Feb 18 14:49:45 crc kubenswrapper[4957]: I0218 14:49:45.920522 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" event={"ID":"8fd97eb8-3fca-445e-9811-3921ab8ec6e8","Type":"ContainerStarted","Data":"f4fd09419ebe1a681898892e9d908cc5c64135f3a95c77289c26f51953ccf5a8"} Feb 18 14:49:46 crc kubenswrapper[4957]: I0218 14:49:46.930161 4957 generic.go:334] "Generic (PLEG): container finished" podID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerID="becc3cc0c3fc37447be3de0ffb67cc59e7e066e74e62b5c1a3711d18c6bd915b" exitCode=0 Feb 18 14:49:46 crc kubenswrapper[4957]: I0218 14:49:46.930244 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" event={"ID":"8fd97eb8-3fca-445e-9811-3921ab8ec6e8","Type":"ContainerDied","Data":"becc3cc0c3fc37447be3de0ffb67cc59e7e066e74e62b5c1a3711d18c6bd915b"} Feb 18 14:49:47 crc kubenswrapper[4957]: I0218 14:49:47.943632 4957 generic.go:334] "Generic (PLEG): container finished" podID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerID="21362edbfe21a3b72abd7f6704aa90eed2997a911e9d313fe70cfd456dc14444" exitCode=0 Feb 18 14:49:47 crc kubenswrapper[4957]: I0218 14:49:47.943684 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" event={"ID":"8fd97eb8-3fca-445e-9811-3921ab8ec6e8","Type":"ContainerDied","Data":"21362edbfe21a3b72abd7f6704aa90eed2997a911e9d313fe70cfd456dc14444"} Feb 18 14:49:48 crc kubenswrapper[4957]: I0218 14:49:48.954893 4957 generic.go:334] "Generic (PLEG): container finished" podID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerID="6ae820b24e120c658a3d8baa9f1f4cb68ec19a00d951ab6fbecb0b0859261bf2" exitCode=0 Feb 18 14:49:48 crc kubenswrapper[4957]: I0218 14:49:48.954969 4957 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" event={"ID":"8fd97eb8-3fca-445e-9811-3921ab8ec6e8","Type":"ContainerDied","Data":"6ae820b24e120c658a3d8baa9f1f4cb68ec19a00d951ab6fbecb0b0859261bf2"} Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.325387 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.502601 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-util\") pod \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.502723 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdxvj\" (UniqueName: \"kubernetes.io/projected/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-kube-api-access-pdxvj\") pod \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.502913 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-bundle\") pod \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\" (UID: \"8fd97eb8-3fca-445e-9811-3921ab8ec6e8\") " Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.503917 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-bundle" (OuterVolumeSpecName: "bundle") pod "8fd97eb8-3fca-445e-9811-3921ab8ec6e8" (UID: "8fd97eb8-3fca-445e-9811-3921ab8ec6e8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.511521 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-kube-api-access-pdxvj" (OuterVolumeSpecName: "kube-api-access-pdxvj") pod "8fd97eb8-3fca-445e-9811-3921ab8ec6e8" (UID: "8fd97eb8-3fca-445e-9811-3921ab8ec6e8"). InnerVolumeSpecName "kube-api-access-pdxvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.523105 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-util" (OuterVolumeSpecName: "util") pod "8fd97eb8-3fca-445e-9811-3921ab8ec6e8" (UID: "8fd97eb8-3fca-445e-9811-3921ab8ec6e8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.605787 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.605844 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdxvj\" (UniqueName: \"kubernetes.io/projected/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-kube-api-access-pdxvj\") on node \"crc\" DevicePath \"\"" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.605860 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fd97eb8-3fca-445e-9811-3921ab8ec6e8-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.978907 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" event={"ID":"8fd97eb8-3fca-445e-9811-3921ab8ec6e8","Type":"ContainerDied","Data":"f4fd09419ebe1a681898892e9d908cc5c64135f3a95c77289c26f51953ccf5a8"} Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.978975 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4fd09419ebe1a681898892e9d908cc5c64135f3a95c77289c26f51953ccf5a8" Feb 18 14:49:50 crc kubenswrapper[4957]: I0218 14:49:50.979032 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.330281 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp"] Feb 18 14:49:56 crc kubenswrapper[4957]: E0218 14:49:56.331519 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="util" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.331539 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="util" Feb 18 14:49:56 crc kubenswrapper[4957]: E0218 14:49:56.331560 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="pull" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.331570 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="pull" Feb 18 14:49:56 crc kubenswrapper[4957]: E0218 14:49:56.331627 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="extract" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.331636 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="extract" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.331873 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd97eb8-3fca-445e-9811-3921ab8ec6e8" containerName="extract" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.332866 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.335943 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-m42fm" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.368220 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp"] Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.417843 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb4s\" (UniqueName: \"kubernetes.io/projected/09673cd4-22c2-43fa-87ae-17b7a8a03308-kube-api-access-csb4s\") pod \"openstack-operator-controller-init-5666c999f9-b87pp\" (UID: \"09673cd4-22c2-43fa-87ae-17b7a8a03308\") " pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.519015 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb4s\" (UniqueName: \"kubernetes.io/projected/09673cd4-22c2-43fa-87ae-17b7a8a03308-kube-api-access-csb4s\") pod \"openstack-operator-controller-init-5666c999f9-b87pp\" (UID: \"09673cd4-22c2-43fa-87ae-17b7a8a03308\") " pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.540693 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb4s\" (UniqueName: \"kubernetes.io/projected/09673cd4-22c2-43fa-87ae-17b7a8a03308-kube-api-access-csb4s\") pod \"openstack-operator-controller-init-5666c999f9-b87pp\" (UID: \"09673cd4-22c2-43fa-87ae-17b7a8a03308\") " pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.653839 4957 util.go:30] "No sandbox for pod can be found. 
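The kube-api-access-csb4s volume that just went through the verify/mount/SetUp sequence is the standard service-account token volume the kubelet builds for every pod: a projected volume combining a bound token, the kube-root-ca.crt ConfigMap, and the pod's namespace via the downward API. A sketch of that shape with the k8s.io/api types; the paths and the 3607-second token expiry are the usual defaults, not values read from this log.

```go
// Sketch of a kube-api-access-* projected volume like the ones in
// this log. The three sources shown are the standard composition;
// details beyond the volume name are well-known defaults, assumed
// here rather than extracted from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // default bound-token lifetime requested by the kubelet
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-csb4s"
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(kubeAPIAccessVolume("kube-api-access-csb4s").Name) }
```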
Feb 18 14:49:56 crc kubenswrapper[4957]: I0218 14:49:56.653839 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp"
Feb 18 14:49:57 crc kubenswrapper[4957]: I0218 14:49:57.097320 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp"]
Feb 18 14:49:58 crc kubenswrapper[4957]: I0218 14:49:58.036266 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" event={"ID":"09673cd4-22c2-43fa-87ae-17b7a8a03308","Type":"ContainerStarted","Data":"51e59432b0c41478ea6785c63e4034ddfbb5706a2d88cc7a393bbfb468a987c5"}
Feb 18 14:50:04 crc kubenswrapper[4957]: I0218 14:50:04.090918 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" event={"ID":"09673cd4-22c2-43fa-87ae-17b7a8a03308","Type":"ContainerStarted","Data":"6e53b53fba9c4ba90a2dd99b95b660b7e9883137fab02e7cafb7a73e864e5ed9"}
Feb 18 14:50:04 crc kubenswrapper[4957]: I0218 14:50:04.091927 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp"
Feb 18 14:50:04 crc kubenswrapper[4957]: I0218 14:50:04.129817 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podStartSLOduration=1.8668869620000001 podStartE2EDuration="8.129799088s" podCreationTimestamp="2026-02-18 14:49:56 +0000 UTC" firstStartedPulling="2026-02-18 14:49:57.100742941 +0000 UTC m=+1103.621607685" lastFinishedPulling="2026-02-18 14:50:03.363655067 +0000 UTC m=+1109.884519811" observedRunningTime="2026-02-18 14:50:04.124740501 +0000 UTC m=+1110.645605255" watchObservedRunningTime="2026-02-18 14:50:04.129799088 +0000 UTC m=+1110.650663832"
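The two durations in the startup-latency entry above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick re-derivation from the timestamps in the entry itself:

```go
// Re-deriving the durations logged by pod_startup_latency_tracker above:
//   e2e = watchObservedRunningTime - podCreationTimestamp
//   slo = e2e - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-18 14:49:56 +0000 UTC")
	firstPull := parse("2026-02-18 14:49:57.100742941 +0000 UTC")
	lastPull := parse("2026-02-18 14:50:03.363655067 +0000 UTC")
	running := parse("2026-02-18 14:50:04.129799088 +0000 UTC")

	e2e := running.Sub(created)          // 8.129799088s
	slo := e2e - lastPull.Sub(firstPull) // 1.866886962s
	fmt.Println(e2e, slo)
}
```

Running this prints 8.129799088s and 1.866886962s, matching the logged podStartE2EDuration and podStartSLOduration: of the eight seconds end to end, roughly 6.3s were spent pulling images.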
containerID="cri-o://cfb8c9a50ccd65f948148f9634be317a0e1f6018a21e87a691f9e80583c8ce0b" gracePeriod=600 Feb 18 14:50:08 crc kubenswrapper[4957]: I0218 14:50:08.123247 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="cfb8c9a50ccd65f948148f9634be317a0e1f6018a21e87a691f9e80583c8ce0b" exitCode=0 Feb 18 14:50:08 crc kubenswrapper[4957]: I0218 14:50:08.123338 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"cfb8c9a50ccd65f948148f9634be317a0e1f6018a21e87a691f9e80583c8ce0b"} Feb 18 14:50:08 crc kubenswrapper[4957]: I0218 14:50:08.123748 4957 scope.go:117] "RemoveContainer" containerID="c57c13136d0c689489f9ddc49289b10f813ba077ecbc190959d9174e61a4424f" Feb 18 14:50:09 crc kubenswrapper[4957]: I0218 14:50:09.135113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"3685eb85ff4135d1d788989087159dbba7bab3b709d7c66d8e50ece795084250"} Feb 18 14:50:16 crc kubenswrapper[4957]: I0218 14:50:16.657080 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 14:50:35 crc kubenswrapper[4957]: I0218 14:50:35.283706 4957 scope.go:117] "RemoveContainer" containerID="d0e0c1cdd74b6128c975451b4349c18aec7c323a4c4915700462c8fa1e77841e" Feb 18 14:50:35 crc kubenswrapper[4957]: I0218 14:50:35.306861 4957 scope.go:117] "RemoveContainer" containerID="794c1f3fa2df71446dd4c0ebdeeb61f9d22b879499426a7dd43b5ca66a6b1b72" Feb 18 14:50:35 crc kubenswrapper[4957]: I0218 14:50:35.340654 4957 scope.go:117] "RemoveContainer" containerID="8e759a7dddd8efb9ea3177b74313912d86f46772aae8793fde2c75b11549f026" Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.606709 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"] Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.608381 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.611798 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nhn9p" Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.624891 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"] Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.626331 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:50:16 crc kubenswrapper[4957]: I0218 14:50:16.657080 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp"
Feb 18 14:50:35 crc kubenswrapper[4957]: I0218 14:50:35.283706 4957 scope.go:117] "RemoveContainer" containerID="d0e0c1cdd74b6128c975451b4349c18aec7c323a4c4915700462c8fa1e77841e"
Feb 18 14:50:35 crc kubenswrapper[4957]: I0218 14:50:35.306861 4957 scope.go:117] "RemoveContainer" containerID="794c1f3fa2df71446dd4c0ebdeeb61f9d22b879499426a7dd43b5ca66a6b1b72"
Feb 18 14:50:35 crc kubenswrapper[4957]: I0218 14:50:35.340654 4957 scope.go:117] "RemoveContainer" containerID="8e759a7dddd8efb9ea3177b74313912d86f46772aae8793fde2c75b11549f026"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.606709 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.608381 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.611798 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nhn9p"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.624891 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.626331 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.630694 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nqwcx"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.662254 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.669524 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.669672 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.676506 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.676644 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sgnbs"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.688493 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.689778 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.707462 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cmshp"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.779529 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wt8f\" (UniqueName: \"kubernetes.io/projected/07a618be-7572-49b8-aeb3-12ce37fbe7b3-kube-api-access-6wt8f\") pod \"cinder-operator-controller-manager-5d946d989d-xt2c9\" (UID: \"07a618be-7572-49b8-aeb3-12ce37fbe7b3\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.779709 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp7vr\" (UniqueName: \"kubernetes.io/projected/86c162c7-c82d-4627-bf84-11d5fb80199f-kube-api-access-jp7vr\") pod \"barbican-operator-controller-manager-868647ff47-jv5dd\" (UID: \"86c162c7-c82d-4627-bf84-11d5fb80199f\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.779771 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4sn\" (UniqueName: \"kubernetes.io/projected/e6651ea1-6311-4597-81cc-a8637f8cc88a-kube-api-access-kn4sn\") pod \"designate-operator-controller-manager-6d8bf5c495-slwlj\" (UID: \"e6651ea1-6311-4597-81cc-a8637f8cc88a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.839153 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.868763 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.899026 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wt8f\" (UniqueName: \"kubernetes.io/projected/07a618be-7572-49b8-aeb3-12ce37fbe7b3-kube-api-access-6wt8f\") pod \"cinder-operator-controller-manager-5d946d989d-xt2c9\" (UID: \"07a618be-7572-49b8-aeb3-12ce37fbe7b3\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.899116 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286mq\" (UniqueName: \"kubernetes.io/projected/cc38dff8-4b46-4281-96a3-ff88c8200f59-kube-api-access-286mq\") pod \"glance-operator-controller-manager-77987464f4-b85vd\" (UID: \"cc38dff8-4b46-4281-96a3-ff88c8200f59\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.899148 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp7vr\" (UniqueName: \"kubernetes.io/projected/86c162c7-c82d-4627-bf84-11d5fb80199f-kube-api-access-jp7vr\") pod \"barbican-operator-controller-manager-868647ff47-jv5dd\" (UID: \"86c162c7-c82d-4627-bf84-11d5fb80199f\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.899175 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4sn\" (UniqueName: \"kubernetes.io/projected/e6651ea1-6311-4597-81cc-a8637f8cc88a-kube-api-access-kn4sn\") pod \"designate-operator-controller-manager-6d8bf5c495-slwlj\" (UID: \"e6651ea1-6311-4597-81cc-a8637f8cc88a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.949540 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.951231 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.959751 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wt8f\" (UniqueName: \"kubernetes.io/projected/07a618be-7572-49b8-aeb3-12ce37fbe7b3-kube-api-access-6wt8f\") pod \"cinder-operator-controller-manager-5d946d989d-xt2c9\" (UID: \"07a618be-7572-49b8-aeb3-12ce37fbe7b3\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.959795 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4sn\" (UniqueName: \"kubernetes.io/projected/e6651ea1-6311-4597-81cc-a8637f8cc88a-kube-api-access-kn4sn\") pod \"designate-operator-controller-manager-6d8bf5c495-slwlj\" (UID: \"e6651ea1-6311-4597-81cc-a8637f8cc88a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.960425 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp7vr\" (UniqueName: \"kubernetes.io/projected/86c162c7-c82d-4627-bf84-11d5fb80199f-kube-api-access-jp7vr\") pod \"barbican-operator-controller-manager-868647ff47-jv5dd\" (UID: \"86c162c7-c82d-4627-bf84-11d5fb80199f\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.960451 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xqkm6"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.977496 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"]
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.978584 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.984133 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 18 14:50:44 crc kubenswrapper[4957]: I0218 14:50:44.984471 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b66tc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.000821 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286mq\" (UniqueName: \"kubernetes.io/projected/cc38dff8-4b46-4281-96a3-ff88c8200f59-kube-api-access-286mq\") pod \"glance-operator-controller-manager-77987464f4-b85vd\" (UID: \"cc38dff8-4b46-4281-96a3-ff88c8200f59\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.013951 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.032168 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.047264 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.049628 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.068099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286mq\" (UniqueName: \"kubernetes.io/projected/cc38dff8-4b46-4281-96a3-ff88c8200f59-kube-api-access-286mq\") pod \"glance-operator-controller-manager-77987464f4-b85vd\" (UID: \"cc38dff8-4b46-4281-96a3-ff88c8200f59\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.068986 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.069177 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fz5zq"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.070166 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.075098 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-scz5p"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.092706 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.103121 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.103382 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcj9g\" (UniqueName: \"kubernetes.io/projected/18f96572-e72c-48ae-b22b-4c6fb7a4d7b9-kube-api-access-xcj9g\") pod \"heat-operator-controller-manager-69f49c598c-hhg5g\" (UID: \"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.103608 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfhn6\" (UniqueName: \"kubernetes.io/projected/6c6f7318-74c7-4971-9888-45a6c025bdde-kube-api-access-sfhn6\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.112588 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.131998 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.143600 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.161352 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.166068 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qpcjj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.198501 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.206515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshnc\" (UniqueName: \"kubernetes.io/projected/94bb800a-9927-4d0f-b9d2-53e4fb398fda-kube-api-access-dshnc\") pod \"horizon-operator-controller-manager-5b9b8895d5-xg6pp\" (UID: \"94bb800a-9927-4d0f-b9d2-53e4fb398fda\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.206620 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjhh\" (UniqueName: \"kubernetes.io/projected/91fd8838-0687-420b-b3dd-4130e221a66d-kube-api-access-vgjhh\") pod \"ironic-operator-controller-manager-554564d7fc-fx4tl\" (UID: \"91fd8838-0687-420b-b3dd-4130e221a66d\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.206665 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.206741 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcj9g\" (UniqueName: \"kubernetes.io/projected/18f96572-e72c-48ae-b22b-4c6fb7a4d7b9-kube-api-access-xcj9g\") pod \"heat-operator-controller-manager-69f49c598c-hhg5g\" (UID: \"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.206826 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfhn6\" (UniqueName: \"kubernetes.io/projected/6c6f7318-74c7-4971-9888-45a6c025bdde-kube-api-access-sfhn6\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:45 crc kubenswrapper[4957]: E0218 14:50:45.207347 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 18 14:50:45 crc kubenswrapper[4957]: E0218 14:50:45.207434 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert podName:6c6f7318-74c7-4971-9888-45a6c025bdde nodeName:}" failed. No retries permitted until 2026-02-18 14:50:45.707393829 +0000 UTC m=+1152.228258573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found
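The failure just above is ordering noise rather than a fault: the infra-operator pod declares a Secret-backed volume whose secret has not been created yet, so SetUp fails and is requeued with a backoff. Roughly, the failing volume looks like the sketch below; only the volume name and secret name come from the log, everything else is an assumption.

```go
// Sketch of the "cert" volume whose mount fails above: a Secret-backed
// volume referencing a secret that does not exist yet. Only the volume
// name and the secret name are taken from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func webhookCertVolume() corev1.Volume {
	return corev1.Volume{
		Name: "cert",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "infra-operator-webhook-server-cert",
				// Optional defaults to false, so the kubelet keeps the pod
				// in ContainerCreating and retries until the secret appears.
			},
		},
	}
}

func main() {
	v := webhookCertVolume()
	fmt.Println(v.Name, v.VolumeSource.Secret.SecretName)
}
```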
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.238217 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.238699 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.264338 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.264546 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.267896 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.268007 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.264750 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.272831 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcj9g\" (UniqueName: \"kubernetes.io/projected/18f96572-e72c-48ae-b22b-4c6fb7a4d7b9-kube-api-access-xcj9g\") pod \"heat-operator-controller-manager-69f49c598c-hhg5g\" (UID: \"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.280328 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xfpnc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.280632 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xh2h4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.281676 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.283736 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfhn6\" (UniqueName: \"kubernetes.io/projected/6c6f7318-74c7-4971-9888-45a6c025bdde-kube-api-access-sfhn6\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.290189 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.290235 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.291278 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.291728 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.299094 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.301053 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fhb84"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.301277 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6fnrm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.310642 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.313475 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.323888 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.325896 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.328080 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-775rt\" (UniqueName: \"kubernetes.io/projected/147c50a5-37fc-4b06-803f-8ad1d1fd4625-kube-api-access-775rt\") pod \"manila-operator-controller-manager-54f6768c69-czjx4\" (UID: \"147c50a5-37fc-4b06-803f-8ad1d1fd4625\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.328204 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dshnc\" (UniqueName: \"kubernetes.io/projected/94bb800a-9927-4d0f-b9d2-53e4fb398fda-kube-api-access-dshnc\") pod \"horizon-operator-controller-manager-5b9b8895d5-xg6pp\" (UID: \"94bb800a-9927-4d0f-b9d2-53e4fb398fda\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.328271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjhh\" (UniqueName: \"kubernetes.io/projected/91fd8838-0687-420b-b3dd-4130e221a66d-kube-api-access-vgjhh\") pod \"ironic-operator-controller-manager-554564d7fc-fx4tl\" (UID: \"91fd8838-0687-420b-b3dd-4130e221a66d\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.328356 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2jtd\" (UniqueName: \"kubernetes.io/projected/eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe-kube-api-access-r2jtd\") pod \"nova-operator-controller-manager-567668f5cf-rd7mm\" (UID: \"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.328444 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwc8j\" (UniqueName: \"kubernetes.io/projected/78c8fb66-d71a-44b7-b858-51f7ca26a407-kube-api-access-jwc8j\") pod \"keystone-operator-controller-manager-b4d948c87-vt4tc\" (UID: \"78c8fb66-d71a-44b7-b858-51f7ca26a407\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.329762 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4rr2b"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.369615 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.380735 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.383954 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dshnc\" (UniqueName: \"kubernetes.io/projected/94bb800a-9927-4d0f-b9d2-53e4fb398fda-kube-api-access-dshnc\") pod \"horizon-operator-controller-manager-5b9b8895d5-xg6pp\" (UID: \"94bb800a-9927-4d0f-b9d2-53e4fb398fda\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.395201 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7vv6x"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.421700 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjhh\" (UniqueName: \"kubernetes.io/projected/91fd8838-0687-420b-b3dd-4130e221a66d-kube-api-access-vgjhh\") pod \"ironic-operator-controller-manager-554564d7fc-fx4tl\" (UID: \"91fd8838-0687-420b-b3dd-4130e221a66d\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.431493 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433289 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwc8j\" (UniqueName: \"kubernetes.io/projected/78c8fb66-d71a-44b7-b858-51f7ca26a407-kube-api-access-jwc8j\") pod \"keystone-operator-controller-manager-b4d948c87-vt4tc\" (UID: \"78c8fb66-d71a-44b7-b858-51f7ca26a407\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433338 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wtp\" (UniqueName: \"kubernetes.io/projected/ef6b6faf-f852-4948-8d1b-d53eace855a4-kube-api-access-d4wtp\") pod \"ovn-operator-controller-manager-d44cf6b75-cxzhc\" (UID: \"ef6b6faf-f852-4948-8d1b-d53eace855a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433400 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-775rt\" (UniqueName: \"kubernetes.io/projected/147c50a5-37fc-4b06-803f-8ad1d1fd4625-kube-api-access-775rt\") pod \"manila-operator-controller-manager-54f6768c69-czjx4\" (UID: \"147c50a5-37fc-4b06-803f-8ad1d1fd4625\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433499 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6gn\" (UniqueName: \"kubernetes.io/projected/f70d6609-fcf8-47f9-89dc-986f8f2f902b-kube-api-access-vx6gn\") pod \"octavia-operator-controller-manager-69f8888797-kx5gv\" (UID: \"f70d6609-fcf8-47f9-89dc-986f8f2f902b\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gjdd\" (UniqueName: \"kubernetes.io/projected/8aaaba83-1c93-481a-9627-a46dbd3eef31-kube-api-access-2gjdd\") pod \"neutron-operator-controller-manager-64ddbf8bb-vn8z8\" (UID: \"8aaaba83-1c93-481a-9627-a46dbd3eef31\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433543 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2jtd\" (UniqueName: \"kubernetes.io/projected/eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe-kube-api-access-r2jtd\") pod \"nova-operator-controller-manager-567668f5cf-rd7mm\" (UID: \"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.433559 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvxk\" (UniqueName: \"kubernetes.io/projected/ff480d9a-ead3-47a1-a765-59507dfe0853-kube-api-access-8xvxk\") pod \"mariadb-operator-controller-manager-6994f66f48-zc4tx\" (UID: \"ff480d9a-ead3-47a1-a765-59507dfe0853\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.443611 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.498775 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.505752 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.529985 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.530246 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.530400 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.515004 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwc8j\" (UniqueName: \"kubernetes.io/projected/78c8fb66-d71a-44b7-b858-51f7ca26a407-kube-api-access-jwc8j\") pod \"keystone-operator-controller-manager-b4d948c87-vt4tc\" (UID: \"78c8fb66-d71a-44b7-b858-51f7ca26a407\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.512311 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-775rt\" (UniqueName: \"kubernetes.io/projected/147c50a5-37fc-4b06-803f-8ad1d1fd4625-kube-api-access-775rt\") pod \"manila-operator-controller-manager-54f6768c69-czjx4\" (UID: \"147c50a5-37fc-4b06-803f-8ad1d1fd4625\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.532946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2jtd\" (UniqueName: \"kubernetes.io/projected/eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe-kube-api-access-r2jtd\") pod \"nova-operator-controller-manager-567668f5cf-rd7mm\" (UID: \"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.534450 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.549533 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lpjqm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.560549 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6gn\" (UniqueName: \"kubernetes.io/projected/f70d6609-fcf8-47f9-89dc-986f8f2f902b-kube-api-access-vx6gn\") pod \"octavia-operator-controller-manager-69f8888797-kx5gv\" (UID: \"f70d6609-fcf8-47f9-89dc-986f8f2f902b\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.560834 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gjdd\" (UniqueName: \"kubernetes.io/projected/8aaaba83-1c93-481a-9627-a46dbd3eef31-kube-api-access-2gjdd\") pod \"neutron-operator-controller-manager-64ddbf8bb-vn8z8\" (UID: \"8aaaba83-1c93-481a-9627-a46dbd3eef31\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.560966 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvxk\" (UniqueName: \"kubernetes.io/projected/ff480d9a-ead3-47a1-a765-59507dfe0853-kube-api-access-8xvxk\") pod \"mariadb-operator-controller-manager-6994f66f48-zc4tx\" (UID: \"ff480d9a-ead3-47a1-a765-59507dfe0853\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.561081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.561240 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wtp\" (UniqueName: \"kubernetes.io/projected/ef6b6faf-f852-4948-8d1b-d53eace855a4-kube-api-access-d4wtp\") pod \"ovn-operator-controller-manager-d44cf6b75-cxzhc\" (UID: \"ef6b6faf-f852-4948-8d1b-d53eace855a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.561349 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9hd\" (UniqueName: \"kubernetes.io/projected/8bd25216-306e-42c0-93da-a51803507c1f-kube-api-access-fv9hd\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.584095 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.695869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wtp\" (UniqueName: \"kubernetes.io/projected/ef6b6faf-f852-4948-8d1b-d53eace855a4-kube-api-access-d4wtp\") pod \"ovn-operator-controller-manager-d44cf6b75-cxzhc\" (UID: \"ef6b6faf-f852-4948-8d1b-d53eace855a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.696357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx6gn\" (UniqueName: \"kubernetes.io/projected/f70d6609-fcf8-47f9-89dc-986f8f2f902b-kube-api-access-vx6gn\") pod \"octavia-operator-controller-manager-69f8888797-kx5gv\" (UID: \"f70d6609-fcf8-47f9-89dc-986f8f2f902b\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.696931 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gjdd\" (UniqueName: \"kubernetes.io/projected/8aaaba83-1c93-481a-9627-a46dbd3eef31-kube-api-access-2gjdd\") pod \"neutron-operator-controller-manager-64ddbf8bb-vn8z8\" (UID: \"8aaaba83-1c93-481a-9627-a46dbd3eef31\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.699731 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.699803 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv9hd\" (UniqueName: \"kubernetes.io/projected/8bd25216-306e-42c0-93da-a51803507c1f-kube-api-access-fv9hd\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:45 crc kubenswrapper[4957]: E0218 14:50:45.700773 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 18 14:50:45 crc kubenswrapper[4957]: E0218 14:50:45.700820 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert podName:8bd25216-306e-42c0-93da-a51803507c1f nodeName:}" failed. No retries permitted until 2026-02-18 14:50:46.200805468 +0000 UTC m=+1152.721670212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" (UID: "8bd25216-306e-42c0-93da-a51803507c1f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.710543 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.712651 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.713869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvxk\" (UniqueName: \"kubernetes.io/projected/ff480d9a-ead3-47a1-a765-59507dfe0853-kube-api-access-8xvxk\") pod \"mariadb-operator-controller-manager-6994f66f48-zc4tx\" (UID: \"ff480d9a-ead3-47a1-a765-59507dfe0853\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.723206 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.726818 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.732745 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9gvzj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.733014 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.750693 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.781356 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.791298 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.791641 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.796043 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.801299 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-f6sfp"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.802544 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmrmx"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.803530 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv9hd\" (UniqueName: \"kubernetes.io/projected/8bd25216-306e-42c0-93da-a51803507c1f-kube-api-access-fv9hd\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.803883 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstr4\" (UniqueName: \"kubernetes.io/projected/644451ba-ce73-4312-b6cd-af99eb6c9fbc-kube-api-access-kstr4\") pod \"placement-operator-controller-manager-8497b45c89-9mf8z\" (UID: \"644451ba-ce73-4312-b6cd-af99eb6c9fbc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.803946 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlck\" (UniqueName: \"kubernetes.io/projected/f507ee0e-6836-4f30-b79e-63979d76a449-kube-api-access-tqlck\") pod \"telemetry-operator-controller-manager-54bf66477-rc4j4\" (UID: \"f507ee0e-6836-4f30-b79e-63979d76a449\") " pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.803988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.804033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbnq\" (UniqueName: \"kubernetes.io/projected/8adf52f0-b132-4541-8962-7fae9bce89c6-kube-api-access-vvbnq\") pod \"swift-operator-controller-manager-68f46476f-5k7g6\" (UID: \"8adf52f0-b132-4541-8962-7fae9bce89c6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"
Feb 18 14:50:45 crc kubenswrapper[4957]: E0218 14:50:45.804726 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 18 14:50:45 crc kubenswrapper[4957]: E0218 14:50:45.804830 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert podName:6c6f7318-74c7-4971-9888-45a6c025bdde nodeName:}" failed. No retries permitted until 2026-02-18 14:50:46.804796873 +0000 UTC m=+1153.325661617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found
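Comparing the retry above with the first failure for the same volume at 14:50:45.207434 shows the per-operation retry backoff doubling: durationBeforeRetry grows from 500ms to 1s on the second consecutive failure. A minimal sketch of such a doubling policy; the starting delay matches the log, while the cap value is an assumption since only the 500ms to 1s step is observed here.

```go
// Doubling retry backoff consistent with the durationBeforeRetry
// values in the two failures above (500ms, then 1s). The cap is an
// assumed placeholder, not taken from this log.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration
	cap   time.Duration
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.cap {
		b.delay = b.cap
	}
	return d
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, cap: 2 * time.Minute}
	for i := 1; i <= 4; i++ {
		fmt.Printf("failure %d: retry in %s\n", i, b.next())
	}
}
```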
No retries permitted until 2026-02-18 14:50:46.804796873 +0000 UTC m=+1153.325661617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.855213 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.876675 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"] Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.877488 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"] Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.894502 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"] Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.905473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstr4\" (UniqueName: \"kubernetes.io/projected/644451ba-ce73-4312-b6cd-af99eb6c9fbc-kube-api-access-kstr4\") pod \"placement-operator-controller-manager-8497b45c89-9mf8z\" (UID: \"644451ba-ce73-4312-b6cd-af99eb6c9fbc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.905534 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlck\" (UniqueName: \"kubernetes.io/projected/f507ee0e-6836-4f30-b79e-63979d76a449-kube-api-access-tqlck\") pod \"telemetry-operator-controller-manager-54bf66477-rc4j4\" (UID: \"f507ee0e-6836-4f30-b79e-63979d76a449\") " pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.905590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvbnq\" (UniqueName: \"kubernetes.io/projected/8adf52f0-b132-4541-8962-7fae9bce89c6-kube-api-access-vvbnq\") pod \"swift-operator-controller-manager-68f46476f-5k7g6\" (UID: \"8adf52f0-b132-4541-8962-7fae9bce89c6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.927665 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.936770 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"] Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.947483 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2fdgz"] Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.948827 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.958526 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2fdgz"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.961178 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n6jkf"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.969065 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvbnq\" (UniqueName: \"kubernetes.io/projected/8adf52f0-b132-4541-8962-7fae9bce89c6-kube-api-access-vvbnq\") pod \"swift-operator-controller-manager-68f46476f-5k7g6\" (UID: \"8adf52f0-b132-4541-8962-7fae9bce89c6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.970796 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlck\" (UniqueName: \"kubernetes.io/projected/f507ee0e-6836-4f30-b79e-63979d76a449-kube-api-access-tqlck\") pod \"telemetry-operator-controller-manager-54bf66477-rc4j4\" (UID: \"f507ee0e-6836-4f30-b79e-63979d76a449\") " pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.971348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstr4\" (UniqueName: \"kubernetes.io/projected/644451ba-ce73-4312-b6cd-af99eb6c9fbc-kube-api-access-kstr4\") pod \"placement-operator-controller-manager-8497b45c89-9mf8z\" (UID: \"644451ba-ce73-4312-b6cd-af99eb6c9fbc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.983891 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"]
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.985064 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.988472 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gfs7s"
Feb 18 14:50:45 crc kubenswrapper[4957]: I0218 14:50:45.990607 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.017648 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpd6z\" (UniqueName: \"kubernetes.io/projected/b724d9a9-8ae5-4295-9b4e-5ec65793b59f-kube-api-access-xpd6z\") pod \"test-operator-controller-manager-7866795846-2fdgz\" (UID: \"b724d9a9-8ae5-4295-9b4e-5ec65793b59f\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.017701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wbk\" (UniqueName: \"kubernetes.io/projected/c17ba5f2-7fb4-4ed7-8623-f987653f8f9b-kube-api-access-68wbk\") pod \"watcher-operator-controller-manager-5db88f68c-kshbq\" (UID: \"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.080569 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.090387 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.095126 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9mvf7"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.095303 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.097055 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.105378 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.119321 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.119392 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpd6z\" (UniqueName: \"kubernetes.io/projected/b724d9a9-8ae5-4295-9b4e-5ec65793b59f-kube-api-access-xpd6z\") pod \"test-operator-controller-manager-7866795846-2fdgz\" (UID: \"b724d9a9-8ae5-4295-9b4e-5ec65793b59f\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.119444 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68wbk\" (UniqueName: \"kubernetes.io/projected/c17ba5f2-7fb4-4ed7-8623-f987653f8f9b-kube-api-access-68wbk\") pod \"watcher-operator-controller-manager-5db88f68c-kshbq\" (UID: \"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.119493 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hclsx\" (UniqueName: \"kubernetes.io/projected/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-kube-api-access-hclsx\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.119558 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.130831 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.132637 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.136094 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-747rr"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.136246 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.161526 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.163607 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68wbk\" (UniqueName: \"kubernetes.io/projected/c17ba5f2-7fb4-4ed7-8623-f987653f8f9b-kube-api-access-68wbk\") pod \"watcher-operator-controller-manager-5db88f68c-kshbq\" (UID: \"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.168157 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpd6z\" (UniqueName: \"kubernetes.io/projected/b724d9a9-8ae5-4295-9b4e-5ec65793b59f-kube-api-access-xpd6z\") pod \"test-operator-controller-manager-7866795846-2fdgz\" (UID: \"b724d9a9-8ae5-4295-9b4e-5ec65793b59f\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.187737 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.208822 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b85vd"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.211209 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"
Feb 18 14:50:46 crc kubenswrapper[4957]: W0218 14:50:46.215965 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc38dff8_4b46_4281_96a3_ff88c8200f59.slice/crio-5095b2c2860855d6050c46ec5b8084dae0cd68f0a025f80338f9ba60a9b476bf WatchSource:0}: Error finding container 5095b2c2860855d6050c46ec5b8084dae0cd68f0a025f80338f9ba60a9b476bf: Status 404 returned error can't find the container with id 5095b2c2860855d6050c46ec5b8084dae0cd68f0a025f80338f9ba60a9b476bf
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.226489 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.226638 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.226790 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:46.726762375 +0000 UTC m=+1153.247627119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.227197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hclsx\" (UniqueName: \"kubernetes.io/projected/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-kube-api-access-hclsx\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.227504 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.227606 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.227635 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert podName:8bd25216-306e-42c0-93da-a51803507c1f nodeName:}" failed. No retries permitted until 2026-02-18 14:50:47.22762699 +0000 UTC m=+1153.748491734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" (UID: "8bd25216-306e-42c0-93da-a51803507c1f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.227964 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttpds\" (UniqueName: \"kubernetes.io/projected/33e776b3-c81e-4655-82a8-88c63ff8adf7-kube-api-access-ttpds\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d7pcl\" (UID: \"33e776b3-c81e-4655-82a8-88c63ff8adf7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.228245 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.228363 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.228388 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. 
No retries permitted until 2026-02-18 14:50:46.728379822 +0000 UTC m=+1153.249244566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "metrics-server-cert" not found Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.232427 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.250277 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hclsx\" (UniqueName: \"kubernetes.io/projected/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-kube-api-access-hclsx\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.258552 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.331817 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttpds\" (UniqueName: \"kubernetes.io/projected/33e776b3-c81e-4655-82a8-88c63ff8adf7-kube-api-access-ttpds\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d7pcl\" (UID: \"33e776b3-c81e-4655-82a8-88c63ff8adf7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.370561 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttpds\" (UniqueName: \"kubernetes.io/projected/33e776b3-c81e-4655-82a8-88c63ff8adf7-kube-api-access-ttpds\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d7pcl\" (UID: \"33e776b3-c81e-4655-82a8-88c63ff8adf7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.549528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" event={"ID":"cc38dff8-4b46-4281-96a3-ff88c8200f59","Type":"ContainerStarted","Data":"5095b2c2860855d6050c46ec5b8084dae0cd68f0a025f80338f9ba60a9b476bf"} Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.587015 4957 util.go:30] "No sandbox for pod can be found. 
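The stalled pods above are all blocked on secret volumes whose source Secret does not exist yet: the kubelet keeps the pod in ContainerCreating and retries MountVolume.SetUp rather than failing the pod. A minimal client-go sketch of how one might wait for the Secrets named in these errors is below; the polling loop and its structure are illustrative assumptions, not part of the kubelet or of this log's tooling.

// Illustrative only: poll until the Secrets whose absence causes the
// MountVolume.SetUp failures above exist. Secret names and the
// "openstack-operators" namespace are taken from the log entries.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	names := []string{"webhook-server-cert", "metrics-server-cert", "infra-operator-webhook-server-cert"}
	for _, name := range names {
		// Block until the Secret shows up, checking every two seconds.
		for {
			_, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				fmt.Println(name, "exists")
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}

Once the Secrets appear (typically created by cert-manager or the operator installer), the kubelet's next retry mounts the volume and the pod proceeds, as the later "MountVolume.SetUp succeeded" entries show.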
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.738440 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.740863 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.740995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.741271 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.741856 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.742006 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:47.741930456 +0000 UTC m=+1154.262795200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.750536 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:47.750501615 +0000 UTC m=+1154.271366449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "metrics-server-cert" not found
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.773561 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj"]
Feb 18 14:50:46 crc kubenswrapper[4957]: W0218 14:50:46.776272 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f96572_e72c_48ae_b22b_4c6fb7a4d7b9.slice/crio-deb54bc50f589cb143ec205c33b027350b21359da3f23520bd1040916ea0960b WatchSource:0}: Error finding container deb54bc50f589cb143ec205c33b027350b21359da3f23520bd1040916ea0960b: Status 404 returned error can't find the container with id deb54bc50f589cb143ec205c33b027350b21359da3f23520bd1040916ea0960b
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.809633 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9"]
Feb 18 14:50:46 crc kubenswrapper[4957]: I0218 14:50:46.843730 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.844075 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 18 14:50:46 crc kubenswrapper[4957]: E0218 14:50:46.844174 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert podName:6c6f7318-74c7-4971-9888-45a6c025bdde nodeName:}" failed. No retries permitted until 2026-02-18 14:50:48.844143749 +0000 UTC m=+1155.365008663 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.034010 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd"] Feb 18 14:50:47 crc kubenswrapper[4957]: W0218 14:50:47.052282 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86c162c7_c82d_4627_bf84_11d5fb80199f.slice/crio-e6a9db1de86df2e28b59c4febdc2eb544a9c6a93108ee0869dc4d9b525f2b54e WatchSource:0}: Error finding container e6a9db1de86df2e28b59c4febdc2eb544a9c6a93108ee0869dc4d9b525f2b54e: Status 404 returned error can't find the container with id e6a9db1de86df2e28b59c4febdc2eb544a9c6a93108ee0869dc4d9b525f2b54e Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.080220 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc"] Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.251111 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 14:50:47 crc kubenswrapper[4957]: E0218 14:50:47.252495 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: E0218 14:50:47.252571 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert podName:8bd25216-306e-42c0-93da-a51803507c1f nodeName:}" failed. No retries permitted until 2026-02-18 14:50:49.252550096 +0000 UTC m=+1155.773414920 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" (UID: "8bd25216-306e-42c0-93da-a51803507c1f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.566207 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" event={"ID":"e6651ea1-6311-4597-81cc-a8637f8cc88a","Type":"ContainerStarted","Data":"c0becc1e533e3afde4988da770ed98dd3fddba132edd33d0cfb983be80486ca7"} Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.568022 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" event={"ID":"07a618be-7572-49b8-aeb3-12ce37fbe7b3","Type":"ContainerStarted","Data":"9019c2d43e46a68f12ace29f1d1a3070056506c58a046941cca5f64f676c6b07"} Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.571797 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" event={"ID":"86c162c7-c82d-4627-bf84-11d5fb80199f","Type":"ContainerStarted","Data":"e6a9db1de86df2e28b59c4febdc2eb544a9c6a93108ee0869dc4d9b525f2b54e"} Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.572957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" event={"ID":"78c8fb66-d71a-44b7-b858-51f7ca26a407","Type":"ContainerStarted","Data":"1d98092a2e91ea30b6950b2394e656205147858faa61987ee797602805058857"} Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.574704 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" event={"ID":"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9","Type":"ContainerStarted","Data":"deb54bc50f589cb143ec205c33b027350b21359da3f23520bd1040916ea0960b"} Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.719065 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp"] Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.730296 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4"] Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.743463 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl"] Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.750084 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm"] Feb 18 14:50:47 crc kubenswrapper[4957]: W0218 14:50:47.755200 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91fd8838_0687_420b_b3dd_4130e221a66d.slice/crio-9aa34431baac1d4bb3e93cad05124ffa0f0fcfde59efef37d87cd72f2e312196 WatchSource:0}: Error finding container 9aa34431baac1d4bb3e93cad05124ffa0f0fcfde59efef37d87cd72f2e312196: Status 404 returned error can't find the container with id 9aa34431baac1d4bb3e93cad05124ffa0f0fcfde59efef37d87cd72f2e312196 Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.758090 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6"] Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.763835 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.763920 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:47 crc kubenswrapper[4957]: E0218 14:50:47.764066 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: E0218 14:50:47.764193 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:49.764151975 +0000 UTC m=+1156.285016719 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: E0218 14:50:47.764201 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: E0218 14:50:47.764304 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:49.764275958 +0000 UTC m=+1156.285140882 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "metrics-server-cert" not found Feb 18 14:50:47 crc kubenswrapper[4957]: I0218 14:50:47.764320 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv"] Feb 18 14:50:47 crc kubenswrapper[4957]: W0218 14:50:47.781825 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb1f93ca_0a6f_4582_9d3a_329ddd1dd4fe.slice/crio-d042e51849e85ae5353fb4196f821dbedaf912103c0bca73508d533ebed5030a WatchSource:0}: Error finding container d042e51849e85ae5353fb4196f821dbedaf912103c0bca73508d533ebed5030a: Status 404 returned error can't find the container with id d042e51849e85ae5353fb4196f821dbedaf912103c0bca73508d533ebed5030a Feb 18 14:50:47 crc kubenswrapper[4957]: W0218 14:50:47.787705 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8adf52f0_b132_4541_8962_7fae9bce89c6.slice/crio-dba91cb67698dcbf257c4c9f006427a9d5b7472b9780330473f040b1a1b7855a WatchSource:0}: Error finding container dba91cb67698dcbf257c4c9f006427a9d5b7472b9780330473f040b1a1b7855a: Status 404 returned error can't find the container with id dba91cb67698dcbf257c4c9f006427a9d5b7472b9780330473f040b1a1b7855a Feb 18 14:50:47 crc kubenswrapper[4957]: W0218 14:50:47.813937 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf70d6609_fcf8_47f9_89dc_986f8f2f902b.slice/crio-ed98de7778d43b727b0c42d1d09f68f57a747b0b4e3f91fa4b9901ab4fbea698 WatchSource:0}: Error finding container ed98de7778d43b727b0c42d1d09f68f57a747b0b4e3f91fa4b9901ab4fbea698: Status 404 returned error can't find the container with id ed98de7778d43b727b0c42d1d09f68f57a747b0b4e3f91fa4b9901ab4fbea698 Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.451748 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.467321 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2fdgz"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.483659 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.518961 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.561323 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.585287 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.599398 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" 
event={"ID":"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe","Type":"ContainerStarted","Data":"d042e51849e85ae5353fb4196f821dbedaf912103c0bca73508d533ebed5030a"} Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.602394 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" event={"ID":"147c50a5-37fc-4b06-803f-8ad1d1fd4625","Type":"ContainerStarted","Data":"e15bd2eaa03cd3c6c74dec87c13466c2869da4ee835dab215b9dbfa258f2a60c"} Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.604963 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.614265 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" event={"ID":"94bb800a-9927-4d0f-b9d2-53e4fb398fda","Type":"ContainerStarted","Data":"e5d8545d152381dcc24814d1005a5981a61f8a90b3b0df62604441f6cdf7154b"} Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.621656 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" event={"ID":"f70d6609-fcf8-47f9-89dc-986f8f2f902b","Type":"ContainerStarted","Data":"ed98de7778d43b727b0c42d1d09f68f57a747b0b4e3f91fa4b9901ab4fbea698"} Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.625542 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl"] Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.626805 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" event={"ID":"91fd8838-0687-420b-b3dd-4130e221a66d","Type":"ContainerStarted","Data":"9aa34431baac1d4bb3e93cad05124ffa0f0fcfde59efef37d87cd72f2e312196"} Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.632680 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" event={"ID":"8adf52f0-b132-4541-8962-7fae9bce89c6","Type":"ContainerStarted","Data":"dba91cb67698dcbf257c4c9f006427a9d5b7472b9780330473f040b1a1b7855a"} Feb 18 14:50:48 crc kubenswrapper[4957]: I0218 14:50:48.892778 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 14:50:48 crc kubenswrapper[4957]: E0218 14:50:48.893011 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:50:48 crc kubenswrapper[4957]: E0218 14:50:48.893086 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert podName:6c6f7318-74c7-4971-9888-45a6c025bdde nodeName:}" failed. No retries permitted until 2026-02-18 14:50:52.893062036 +0000 UTC m=+1159.413926780 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:50:49 crc kubenswrapper[4957]: I0218 14:50:49.302974 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 14:50:49 crc kubenswrapper[4957]: E0218 14:50:49.304754 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:50:49 crc kubenswrapper[4957]: E0218 14:50:49.304803 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert podName:8bd25216-306e-42c0-93da-a51803507c1f nodeName:}" failed. No retries permitted until 2026-02-18 14:50:53.304789431 +0000 UTC m=+1159.825654175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" (UID: "8bd25216-306e-42c0-93da-a51803507c1f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:50:49 crc kubenswrapper[4957]: I0218 14:50:49.815833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:49 crc kubenswrapper[4957]: I0218 14:50:49.816192 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:50:49 crc kubenswrapper[4957]: E0218 14:50:49.816751 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:50:49 crc kubenswrapper[4957]: E0218 14:50:49.816922 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:50:49 crc kubenswrapper[4957]: E0218 14:50:49.817043 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:53.817013697 +0000 UTC m=+1160.337878611 (durationBeforeRetry 4s). 
Feb 18 14:50:49 crc kubenswrapper[4957]: E0218 14:50:49.818333 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:50:53.818278804 +0000 UTC m=+1160.339143568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.145849 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff480d9a_ead3_47a1_a765_59507dfe0853.slice/crio-8a5819ccbc2bc8579c1c5320e38bebf034fec9431a5de9210bef896317b51ac5 WatchSource:0}: Error finding container 8a5819ccbc2bc8579c1c5320e38bebf034fec9431a5de9210bef896317b51ac5: Status 404 returned error can't find the container with id 8a5819ccbc2bc8579c1c5320e38bebf034fec9431a5de9210bef896317b51ac5
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.150199 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb724d9a9_8ae5_4295_9b4e_5ec65793b59f.slice/crio-52024d68dfe4cd2d973420efb55696ffba9a384a96d12d7009df441bb91163b8 WatchSource:0}: Error finding container 52024d68dfe4cd2d973420efb55696ffba9a384a96d12d7009df441bb91163b8: Status 404 returned error can't find the container with id 52024d68dfe4cd2d973420efb55696ffba9a384a96d12d7009df441bb91163b8
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.156884 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod644451ba_ce73_4312_b6cd_af99eb6c9fbc.slice/crio-eae80cd79670407bc03511fddd8616a23a97ecc7ab86c452f45bec433eef7cdd WatchSource:0}: Error finding container eae80cd79670407bc03511fddd8616a23a97ecc7ab86c452f45bec433eef7cdd: Status 404 returned error can't find the container with id eae80cd79670407bc03511fddd8616a23a97ecc7ab86c452f45bec433eef7cdd
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.158995 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8aaaba83_1c93_481a_9627_a46dbd3eef31.slice/crio-6a7a616de011050d9722f933f80ba6c37b45cba7b693d435c07c7ef343d16c3f WatchSource:0}: Error finding container 6a7a616de011050d9722f933f80ba6c37b45cba7b693d435c07c7ef343d16c3f: Status 404 returned error can't find the container with id 6a7a616de011050d9722f933f80ba6c37b45cba7b693d435c07c7ef343d16c3f
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.170882 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6b6faf_f852_4948_8d1b_d53eace855a4.slice/crio-10910111ebd3e94fd55c85cf2072c4b990a5323432c9297c56dcf73087170fee WatchSource:0}: Error finding container 10910111ebd3e94fd55c85cf2072c4b990a5323432c9297c56dcf73087170fee: Status 404 returned error can't find the container with id 10910111ebd3e94fd55c85cf2072c4b990a5323432c9297c56dcf73087170fee
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.173338 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17ba5f2_7fb4_4ed7_8623_f987653f8f9b.slice/crio-656589d53b35d58d1b7c84ed3e64211106af3dc021330d077c49f1d9c9987b20 WatchSource:0}: Error finding container 656589d53b35d58d1b7c84ed3e64211106af3dc021330d077c49f1d9c9987b20: Status 404 returned error can't find the container with id 656589d53b35d58d1b7c84ed3e64211106af3dc021330d077c49f1d9c9987b20
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.178966 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf507ee0e_6836_4f30_b79e_63979d76a449.slice/crio-18603c57728895fee75bdae1ed8a72ca4fbcb2fd1959558a61a7ec78cee0bb2e WatchSource:0}: Error finding container 18603c57728895fee75bdae1ed8a72ca4fbcb2fd1959558a61a7ec78cee0bb2e: Status 404 returned error can't find the container with id 18603c57728895fee75bdae1ed8a72ca4fbcb2fd1959558a61a7ec78cee0bb2e
Feb 18 14:50:52 crc kubenswrapper[4957]: W0218 14:50:52.188274 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33e776b3_c81e_4655_82a8_88c63ff8adf7.slice/crio-a4316467f086c899aab15c2bb3ed54a29bfdcb53394a4790389fbeb3fa5cfb6e WatchSource:0}: Error finding container a4316467f086c899aab15c2bb3ed54a29bfdcb53394a4790389fbeb3fa5cfb6e: Status 404 returned error can't find the container with id a4316467f086c899aab15c2bb3ed54a29bfdcb53394a4790389fbeb3fa5cfb6e
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.708783 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" event={"ID":"b724d9a9-8ae5-4295-9b4e-5ec65793b59f","Type":"ContainerStarted","Data":"52024d68dfe4cd2d973420efb55696ffba9a384a96d12d7009df441bb91163b8"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.710947 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" event={"ID":"644451ba-ce73-4312-b6cd-af99eb6c9fbc","Type":"ContainerStarted","Data":"eae80cd79670407bc03511fddd8616a23a97ecc7ab86c452f45bec433eef7cdd"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.712740 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" event={"ID":"33e776b3-c81e-4655-82a8-88c63ff8adf7","Type":"ContainerStarted","Data":"a4316467f086c899aab15c2bb3ed54a29bfdcb53394a4790389fbeb3fa5cfb6e"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.714395 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" event={"ID":"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b","Type":"ContainerStarted","Data":"656589d53b35d58d1b7c84ed3e64211106af3dc021330d077c49f1d9c9987b20"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.716811 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" event={"ID":"f507ee0e-6836-4f30-b79e-63979d76a449","Type":"ContainerStarted","Data":"18603c57728895fee75bdae1ed8a72ca4fbcb2fd1959558a61a7ec78cee0bb2e"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.719142 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" event={"ID":"8aaaba83-1c93-481a-9627-a46dbd3eef31","Type":"ContainerStarted","Data":"6a7a616de011050d9722f933f80ba6c37b45cba7b693d435c07c7ef343d16c3f"}
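The journald-wrapped klog format above is tedious to scan by eye. A small, hypothetical helper for pulling the volume-mount backoff intervals out of a log like this one is sketched below; it assumes only the "(durationBeforeRetry ...)" phrasing emitted by nestedpendingoperations.go and is not part of any existing tool.

// Illustrative only: read a kubelet log on stdin and print each
// volume-mount retry backoff, e.g. "500ms", "1s", "2s", "4s", "8s", "16s".
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`durationBeforeRetry (\S+)\)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journald lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println(m[1])
		}
	}
}

Run against this log it would print the doubling sequence discussed after the next group of entries.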
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.721085 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" event={"ID":"ff480d9a-ead3-47a1-a765-59507dfe0853","Type":"ContainerStarted","Data":"8a5819ccbc2bc8579c1c5320e38bebf034fec9431a5de9210bef896317b51ac5"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.722873 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" event={"ID":"ef6b6faf-f852-4948-8d1b-d53eace855a4","Type":"ContainerStarted","Data":"10910111ebd3e94fd55c85cf2072c4b990a5323432c9297c56dcf73087170fee"}
Feb 18 14:50:52 crc kubenswrapper[4957]: I0218 14:50:52.904637 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:50:52 crc kubenswrapper[4957]: E0218 14:50:52.904945 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 18 14:50:52 crc kubenswrapper[4957]: E0218 14:50:52.905025 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert podName:6c6f7318-74c7-4971-9888-45a6c025bdde nodeName:}" failed. No retries permitted until 2026-02-18 14:51:00.905002433 +0000 UTC m=+1167.425867177 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert") pod "infra-operator-controller-manager-79d975b745-pgchj" (UID: "6c6f7318-74c7-4971-9888-45a6c025bdde") : secret "infra-operator-webhook-server-cert" not found
Feb 18 14:50:53 crc kubenswrapper[4957]: I0218 14:50:53.312434 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:50:53 crc kubenswrapper[4957]: E0218 14:50:53.315412 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 18 14:50:53 crc kubenswrapper[4957]: E0218 14:50:53.315504 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert podName:8bd25216-306e-42c0-93da-a51803507c1f nodeName:}" failed. No retries permitted until 2026-02-18 14:51:01.315485301 +0000 UTC m=+1167.836350045 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" (UID: "8bd25216-306e-42c0-93da-a51803507c1f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 18 14:50:53 crc kubenswrapper[4957]: I0218 14:50:53.824699 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:53 crc kubenswrapper[4957]: E0218 14:50:53.824963 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 18 14:50:53 crc kubenswrapper[4957]: I0218 14:50:53.825357 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:50:53 crc kubenswrapper[4957]: E0218 14:50:53.825478 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:51:01.82541315 +0000 UTC m=+1168.346277894 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found
Feb 18 14:50:53 crc kubenswrapper[4957]: E0218 14:50:53.825539 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 18 14:50:53 crc kubenswrapper[4957]: E0218 14:50:53.825654 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:51:01.825619936 +0000 UTC m=+1168.346484870 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "metrics-server-cert" not found
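Read in sequence, the nestedpendingoperations.go entries show the retry interval for each failing volume doubling: 500ms, 1s, 2s, 4s, 8s, then 16s in the entries that follow. A minimal sketch of such a doubling policy is below, with hypothetical names; the kubelet's actual backoff lives in nestedpendingoperations.go and caps the delay rather than growing it indefinitely, which the max field here stands in for.

// Minimal sketch of the doubling backoff visible in the intervals above
// (500ms -> 1s -> 2s -> 4s -> 8s -> 16s). Names are hypothetical.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max, next time.Duration
}

// durationBeforeRetry returns the current delay and doubles it for the
// next failure, up to the configured cap.
func (b *backoff) durationBeforeRetry() time.Duration {
	if b.next == 0 {
		b.next = b.initial
	}
	d := b.next
	if b.next < b.max {
		b.next *= 2
	}
	return d
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 6; i++ {
		fmt.Println(b.durationBeforeRetry()) // 500ms 1s 2s 4s 8s 16s
	}
}

The doubling restarts from the initial delay once an attempt succeeds, which is consistent with the mounts below succeeding on the first try after the Secrets finally appear.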
Feb 18 14:51:00 crc kubenswrapper[4957]: I0218 14:51:00.966138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:51:00 crc kubenswrapper[4957]: I0218 14:51:00.990503 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c6f7318-74c7-4971-9888-45a6c025bdde-cert\") pod \"infra-operator-controller-manager-79d975b745-pgchj\" (UID: \"6c6f7318-74c7-4971-9888-45a6c025bdde\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.100385 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.376252 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.380296 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bd25216-306e-42c0-93da-a51803507c1f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr\" (UID: \"8bd25216-306e-42c0-93da-a51803507c1f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.410541 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.892553 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:51:01 crc kubenswrapper[4957]: E0218 14:51:01.892850 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 18 14:51:01 crc kubenswrapper[4957]: E0218 14:51:01.893144 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs podName:33e1b915-d740-4ec7-b74e-b8b8b6356d4d nodeName:}" failed. No retries permitted until 2026-02-18 14:51:17.893115387 +0000 UTC m=+1184.413980131 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs") pod "openstack-operator-controller-manager-549dd7c895-84tm5" (UID: "33e1b915-d740-4ec7-b74e-b8b8b6356d4d") : secret "webhook-server-cert" not found
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.893032 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:51:01 crc kubenswrapper[4957]: I0218 14:51:01.904190 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-metrics-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"
Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.102809 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867"
Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.103199 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgjhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-fx4tl_openstack-operators(91fd8838-0687-420b-b3dd-4130e221a66d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.104617 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d"
Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.814574 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da"
Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.814910 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dshnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-xg6pp_openstack-operators(94bb800a-9927-4d0f-b9d2-53e4fb398fda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.816137 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.833555 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d" Feb 18 14:51:03 crc kubenswrapper[4957]: E0218 14:51:03.834034 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" Feb 18 14:51:06 crc kubenswrapper[4957]: E0218 14:51:06.355112 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 18 14:51:06 crc kubenswrapper[4957]: E0218 14:51:06.355385 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-775rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-czjx4_openstack-operators(147c50a5-37fc-4b06-803f-8ad1d1fd4625): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:06 crc kubenswrapper[4957]: E0218 14:51:06.356609 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" Feb 18 14:51:06 crc kubenswrapper[4957]: E0218 14:51:06.858739 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" Feb 18 14:51:07 crc kubenswrapper[4957]: E0218 14:51:07.618844 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 18 14:51:07 crc kubenswrapper[4957]: E0218 14:51:07.619548 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvbnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-5k7g6_openstack-operators(8adf52f0-b132-4541-8962-7fae9bce89c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:07 crc kubenswrapper[4957]: E0218 14:51:07.621607 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" Feb 18 14:51:07 crc kubenswrapper[4957]: E0218 14:51:07.869698 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.046565 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.047633 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gjdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-vn8z8_openstack-operators(8aaaba83-1c93-481a-9627-a46dbd3eef31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.048978 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.492893 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.493081 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx6gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-kx5gv_openstack-operators(f70d6609-fcf8-47f9-89dc-986f8f2f902b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.495137 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.903106 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" Feb 18 14:51:11 crc kubenswrapper[4957]: E0218 14:51:11.906481 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" 
podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" Feb 18 14:51:13 crc kubenswrapper[4957]: E0218 14:51:13.811009 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 18 14:51:13 crc kubenswrapper[4957]: E0218 14:51:13.811622 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68wbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-kshbq_openstack-operators(c17ba5f2-7fb4-4ed7-8623-f987653f8f9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:13 crc kubenswrapper[4957]: E0218 14:51:13.812943 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" podUID="c17ba5f2-7fb4-4ed7-8623-f987653f8f9b" Feb 18 14:51:13 crc kubenswrapper[4957]: E0218 14:51:13.920469 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" podUID="c17ba5f2-7fb4-4ed7-8623-f987653f8f9b" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.397156 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.397466 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwc8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-vt4tc_openstack-operators(78c8fb66-d71a-44b7-b858-51f7ca26a407): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.399164 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" 
podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.788316 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.788847 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpd6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-2fdgz_openstack-operators(b724d9a9-8ae5-4295-9b4e-5ec65793b59f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.789911 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podUID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.928633 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" Feb 18 14:51:14 crc kubenswrapper[4957]: E0218 14:51:14.928693 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podUID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.318123 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.318617 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d4wtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-cxzhc_openstack-operators(ef6b6faf-f852-4948-8d1b-d53eace855a4): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.319872 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.774672 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.774899 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kstr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-9mf8z_openstack-operators(644451ba-ce73-4312-b6cd-af99eb6c9fbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.776893 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.940674 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.940744 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.73:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.940909 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.73:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqlck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-54bf66477-rc4j4_openstack-operators(f507ee0e-6836-4f30-b79e-63979d76a449): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.941991 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" Feb 18 14:51:17 crc kubenswrapper[4957]: I0218 14:51:17.944371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:51:17 crc kubenswrapper[4957]: I0218 14:51:17.956981 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e1b915-d740-4ec7-b74e-b8b8b6356d4d-webhook-certs\") pod \"openstack-operator-controller-manager-549dd7c895-84tm5\" (UID: \"33e1b915-d740-4ec7-b74e-b8b8b6356d4d\") " pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.962942 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.963339 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.73:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" Feb 18 14:51:17 crc kubenswrapper[4957]: E0218 14:51:17.963570 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" Feb 18 14:51:18 crc kubenswrapper[4957]: I0218 14:51:18.064980 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:51:18 crc kubenswrapper[4957]: E0218 14:51:18.397967 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 14:51:18 crc kubenswrapper[4957]: E0218 14:51:18.398281 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttpds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-d7pcl_openstack-operators(33e776b3-c81e-4655-82a8-88c63ff8adf7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:18 crc kubenswrapper[4957]: E0218 14:51:18.404902 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" podUID="33e776b3-c81e-4655-82a8-88c63ff8adf7" Feb 18 14:51:18 crc kubenswrapper[4957]: E0218 14:51:18.906990 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 14:51:18 crc kubenswrapper[4957]: E0218 14:51:18.907240 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2jtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-rd7mm_openstack-operators(eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:51:18 crc kubenswrapper[4957]: E0218 14:51:18.908768 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" Feb 18 14:51:19 crc kubenswrapper[4957]: E0218 14:51:19.021059 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" podUID="33e776b3-c81e-4655-82a8-88c63ff8adf7" Feb 18 14:51:19 crc kubenswrapper[4957]: E0218 14:51:19.025632 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.480639 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pgchj"] Feb 18 14:51:19 crc kubenswrapper[4957]: W0218 14:51:19.495775 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c6f7318_74c7_4971_9888_45a6c025bdde.slice/crio-04d88c443561d948d6494c7746804a59c57c0fe6d3a461a6c62e59e3179f4365 WatchSource:0}: Error finding container 04d88c443561d948d6494c7746804a59c57c0fe6d3a461a6c62e59e3179f4365: Status 404 returned error can't find the container with id 04d88c443561d948d6494c7746804a59c57c0fe6d3a461a6c62e59e3179f4365 Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.652646 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr"] Feb 18 14:51:19 crc kubenswrapper[4957]: W0218 14:51:19.654289 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bd25216_306e_42c0_93da_a51803507c1f.slice/crio-48b327b88d6d959dec0c74c57510ef7ae5197503eb79cb597bfea6c54a21cb1d WatchSource:0}: Error finding container 48b327b88d6d959dec0c74c57510ef7ae5197503eb79cb597bfea6c54a21cb1d: Status 404 returned error can't find the container with id 48b327b88d6d959dec0c74c57510ef7ae5197503eb79cb597bfea6c54a21cb1d Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.786111 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5"] Feb 18 14:51:19 crc kubenswrapper[4957]: W0218 14:51:19.794270 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33e1b915_d740_4ec7_b74e_b8b8b6356d4d.slice/crio-57ea7f051b357ab0fd2daa37f825bbeb8a2afdd8049a8b9909aca88c977a6d69 WatchSource:0}: Error finding container 57ea7f051b357ab0fd2daa37f825bbeb8a2afdd8049a8b9909aca88c977a6d69: Status 404 returned error can't find the container with id 57ea7f051b357ab0fd2daa37f825bbeb8a2afdd8049a8b9909aca88c977a6d69 Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.975495 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" event={"ID":"07a618be-7572-49b8-aeb3-12ce37fbe7b3","Type":"ContainerStarted","Data":"55cb52efab64e1a01e30e5e8774b1a8050be910854a4f247409108e4efdd6c46"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.977512 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.979249 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" event={"ID":"91fd8838-0687-420b-b3dd-4130e221a66d","Type":"ContainerStarted","Data":"82fc05ab6ab6921cb405de4a1c392ed2a0127f16a96515ebfb8dfbe845722ea4"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.979740 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.981113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" event={"ID":"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9","Type":"ContainerStarted","Data":"293cf42b3f50593b2932ae2af41ab4fbca55dc0922eb5797a858e5c7d72ab522"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.981629 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.983283 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" event={"ID":"e6651ea1-6311-4597-81cc-a8637f8cc88a","Type":"ContainerStarted","Data":"3245fc06290f3d7eaba43e456fa8f2118ac49f70601a4bc9cfd7c9d95b6e5db7"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.983709 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.986123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" event={"ID":"94bb800a-9927-4d0f-b9d2-53e4fb398fda","Type":"ContainerStarted","Data":"76b788c994d17d21d44017ad51eb06796e1e04f448cb944c32f06a18b81832ef"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.986531 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.990352 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" event={"ID":"6c6f7318-74c7-4971-9888-45a6c025bdde","Type":"ContainerStarted","Data":"04d88c443561d948d6494c7746804a59c57c0fe6d3a461a6c62e59e3179f4365"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.991893 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" event={"ID":"ff480d9a-ead3-47a1-a765-59507dfe0853","Type":"ContainerStarted","Data":"14fd62a5a09beb5fc28157c1ed82c2f6be5742bb8bd993c668c3451458eeac47"} Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.992599 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 14:51:19 crc kubenswrapper[4957]: I0218 14:51:19.999244 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" podStartSLOduration=5.495921708 podStartE2EDuration="35.999224034s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:46.811276793 +0000 UTC m=+1153.332141537" lastFinishedPulling="2026-02-18 14:51:17.314579119 +0000 UTC m=+1183.835443863" observedRunningTime="2026-02-18 14:51:19.992927801 +0000 UTC m=+1186.513792555" watchObservedRunningTime="2026-02-18 14:51:19.999224034 +0000 UTC m=+1186.520088778" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.007595 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" 
event={"ID":"86c162c7-c82d-4627-bf84-11d5fb80199f","Type":"ContainerStarted","Data":"7d548e1a834b8f4f53e2dea7c767a1d0b912338920fd8605de78b88090ef6d59"} Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.007684 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.009293 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" event={"ID":"8bd25216-306e-42c0-93da-a51803507c1f","Type":"ContainerStarted","Data":"48b327b88d6d959dec0c74c57510ef7ae5197503eb79cb597bfea6c54a21cb1d"} Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.010558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" event={"ID":"33e1b915-d740-4ec7-b74e-b8b8b6356d4d","Type":"ContainerStarted","Data":"57ea7f051b357ab0fd2daa37f825bbeb8a2afdd8049a8b9909aca88c977a6d69"} Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.012185 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" event={"ID":"cc38dff8-4b46-4281-96a3-ff88c8200f59","Type":"ContainerStarted","Data":"0a62d87beb942e7bb0389dbbe45d3ae760e02628c464a58cbdee9272f63cb387"} Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.012731 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.065368 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podStartSLOduration=4.538199395 podStartE2EDuration="36.065351577s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.743577076 +0000 UTC m=+1154.264441820" lastFinishedPulling="2026-02-18 14:51:19.270729258 +0000 UTC m=+1185.791594002" observedRunningTime="2026-02-18 14:51:20.062066302 +0000 UTC m=+1186.582931046" watchObservedRunningTime="2026-02-18 14:51:20.065351577 +0000 UTC m=+1186.586216321" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.089594 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podStartSLOduration=4.793093529 podStartE2EDuration="36.089578712s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.761854648 +0000 UTC m=+1154.282719392" lastFinishedPulling="2026-02-18 14:51:19.058339821 +0000 UTC m=+1185.579204575" observedRunningTime="2026-02-18 14:51:20.085117772 +0000 UTC m=+1186.605982516" watchObservedRunningTime="2026-02-18 14:51:20.089578712 +0000 UTC m=+1186.610443456" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.116815 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" podStartSLOduration=11.89130033 podStartE2EDuration="36.116797964s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:46.808412079 +0000 UTC m=+1153.329276823" lastFinishedPulling="2026-02-18 14:51:11.033909703 +0000 UTC m=+1177.554774457" observedRunningTime="2026-02-18 14:51:20.111594062 +0000 UTC m=+1186.632458806" 
watchObservedRunningTime="2026-02-18 14:51:20.116797964 +0000 UTC m=+1186.637662698" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.138093 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" podStartSLOduration=9.604529246 podStartE2EDuration="36.138073842s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:46.800303434 +0000 UTC m=+1153.321168168" lastFinishedPulling="2026-02-18 14:51:13.33384802 +0000 UTC m=+1179.854712764" observedRunningTime="2026-02-18 14:51:20.137709652 +0000 UTC m=+1186.658574406" watchObservedRunningTime="2026-02-18 14:51:20.138073842 +0000 UTC m=+1186.658938596" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.175559 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podStartSLOduration=9.410942667 podStartE2EDuration="36.175536142s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.151767847 +0000 UTC m=+1158.672632601" lastFinishedPulling="2026-02-18 14:51:18.916361332 +0000 UTC m=+1185.437226076" observedRunningTime="2026-02-18 14:51:20.166513209 +0000 UTC m=+1186.687377953" watchObservedRunningTime="2026-02-18 14:51:20.175536142 +0000 UTC m=+1186.696400886" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.190391 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" podStartSLOduration=12.214722146 podStartE2EDuration="36.190373883s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.058273116 +0000 UTC m=+1153.579137850" lastFinishedPulling="2026-02-18 14:51:11.033924843 +0000 UTC m=+1177.554789587" observedRunningTime="2026-02-18 14:51:20.186396688 +0000 UTC m=+1186.707261452" watchObservedRunningTime="2026-02-18 14:51:20.190373883 +0000 UTC m=+1186.711238627" Feb 18 14:51:20 crc kubenswrapper[4957]: I0218 14:51:20.227481 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" podStartSLOduration=11.473059747 podStartE2EDuration="36.227455212s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:46.279540779 +0000 UTC m=+1152.800405523" lastFinishedPulling="2026-02-18 14:51:11.033936234 +0000 UTC m=+1177.554800988" observedRunningTime="2026-02-18 14:51:20.225108163 +0000 UTC m=+1186.745972907" watchObservedRunningTime="2026-02-18 14:51:20.227455212 +0000 UTC m=+1186.748319956" Feb 18 14:51:21 crc kubenswrapper[4957]: I0218 14:51:21.024233 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" event={"ID":"33e1b915-d740-4ec7-b74e-b8b8b6356d4d","Type":"ContainerStarted","Data":"36d4be2eb68074c03ba91f396a4d8e45729d01422b9eba20f191932441ec74f5"} Feb 18 14:51:22 crc kubenswrapper[4957]: I0218 14:51:22.032896 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:51:22 crc kubenswrapper[4957]: I0218 14:51:22.237682 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podStartSLOduration=37.237665423 
podStartE2EDuration="37.237665423s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:51:21.061070965 +0000 UTC m=+1187.581935709" watchObservedRunningTime="2026-02-18 14:51:22.237665423 +0000 UTC m=+1188.758530157" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.053414 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" event={"ID":"147c50a5-37fc-4b06-803f-8ad1d1fd4625","Type":"ContainerStarted","Data":"775828847bbe6eab090bab810ff3d23e9c9158df42f6aa15ac2d381ab4f6e1a4"} Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.054151 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.056046 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" event={"ID":"8bd25216-306e-42c0-93da-a51803507c1f","Type":"ContainerStarted","Data":"7784a495e093959bb54bfa3752c432302bc14dec26cfc01f800810757b0aa226"} Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.056091 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.058280 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" event={"ID":"6c6f7318-74c7-4971-9888-45a6c025bdde","Type":"ContainerStarted","Data":"c1c834fa1e5f54a66603ef6c4c7a02077f8ad7ce4500b581beae338a5652083b"} Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.059656 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.061552 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" event={"ID":"8adf52f0-b132-4541-8962-7fae9bce89c6","Type":"ContainerStarted","Data":"64c5948e60606b306816ffc784ec763072230aa73135b3beca2d6c498bb68dcb"} Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.061851 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.081475 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podStartSLOduration=4.471490586 podStartE2EDuration="40.081450245s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.740838347 +0000 UTC m=+1154.261703091" lastFinishedPulling="2026-02-18 14:51:23.350797986 +0000 UTC m=+1189.871662750" observedRunningTime="2026-02-18 14:51:24.076926273 +0000 UTC m=+1190.597791017" watchObservedRunningTime="2026-02-18 14:51:24.081450245 +0000 UTC m=+1190.602314989" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.111885 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" podStartSLOduration=36.256490886 podStartE2EDuration="40.111860939s" 
podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:51:19.498596855 +0000 UTC m=+1186.019461609" lastFinishedPulling="2026-02-18 14:51:23.353966908 +0000 UTC m=+1189.874831662" observedRunningTime="2026-02-18 14:51:24.106072761 +0000 UTC m=+1190.626937505" watchObservedRunningTime="2026-02-18 14:51:24.111860939 +0000 UTC m=+1190.632725683" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.182226 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podStartSLOduration=3.623901119 podStartE2EDuration="39.182199595s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.802182361 +0000 UTC m=+1154.323047105" lastFinishedPulling="2026-02-18 14:51:23.360480817 +0000 UTC m=+1189.881345581" observedRunningTime="2026-02-18 14:51:24.132957473 +0000 UTC m=+1190.653822227" watchObservedRunningTime="2026-02-18 14:51:24.182199595 +0000 UTC m=+1190.703064339" Feb 18 14:51:24 crc kubenswrapper[4957]: I0218 14:51:24.183039 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podStartSLOduration=36.48803118 podStartE2EDuration="40.183031319s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:51:19.656309541 +0000 UTC m=+1186.177174285" lastFinishedPulling="2026-02-18 14:51:23.35130967 +0000 UTC m=+1189.872174424" observedRunningTime="2026-02-18 14:51:24.169838575 +0000 UTC m=+1190.690703319" watchObservedRunningTime="2026-02-18 14:51:24.183031319 +0000 UTC m=+1190.703896063" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.026621 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.103958 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.244325 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.269305 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.448056 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.511702 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.535504 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" Feb 18 14:51:25 crc kubenswrapper[4957]: I0218 14:51:25.931901 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 14:51:26 crc kubenswrapper[4957]: I0218 14:51:26.084573 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" event={"ID":"8aaaba83-1c93-481a-9627-a46dbd3eef31","Type":"ContainerStarted","Data":"8cdf0afe83a85d6bf96fad5146949a009b78b545e507909d9843eb47dfecaaef"} Feb 18 14:51:26 crc kubenswrapper[4957]: I0218 14:51:26.085446 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" Feb 18 14:51:26 crc kubenswrapper[4957]: I0218 14:51:26.111964 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podStartSLOduration=9.639540835 podStartE2EDuration="42.111940566s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.162762027 +0000 UTC m=+1158.683626771" lastFinishedPulling="2026-02-18 14:51:24.635161758 +0000 UTC m=+1191.156026502" observedRunningTime="2026-02-18 14:51:26.10522673 +0000 UTC m=+1192.626091484" watchObservedRunningTime="2026-02-18 14:51:26.111940566 +0000 UTC m=+1192.632805310" Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.072824 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.105588 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" event={"ID":"f70d6609-fcf8-47f9-89dc-986f8f2f902b","Type":"ContainerStarted","Data":"fa6417bfc6a7b42ddeef2365810ad7dff17a1a7009678576c67615b5019beb0e"} Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.106138 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.107870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" event={"ID":"b724d9a9-8ae5-4295-9b4e-5ec65793b59f","Type":"ContainerStarted","Data":"fd8403759e30331549c2e4eacb6050e7241900bfd9dc4c1665a5aaa4974cb2d7"} Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.108117 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.150387 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podStartSLOduration=4.351572208 podStartE2EDuration="44.150363878s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.840052132 +0000 UTC m=+1154.360916876" lastFinishedPulling="2026-02-18 14:51:27.638843802 +0000 UTC m=+1194.159708546" observedRunningTime="2026-02-18 14:51:28.124791675 +0000 UTC m=+1194.645656419" watchObservedRunningTime="2026-02-18 14:51:28.150363878 +0000 UTC m=+1194.671228622" Feb 18 14:51:28 crc kubenswrapper[4957]: I0218 14:51:28.152031 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podStartSLOduration=7.666094794 podStartE2EDuration="43.152021556s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.152879579 +0000 UTC m=+1158.673744323" lastFinishedPulling="2026-02-18 
14:51:27.638806341 +0000 UTC m=+1194.159671085" observedRunningTime="2026-02-18 14:51:28.144127867 +0000 UTC m=+1194.664992611" watchObservedRunningTime="2026-02-18 14:51:28.152021556 +0000 UTC m=+1194.672886300" Feb 18 14:51:29 crc kubenswrapper[4957]: I0218 14:51:29.130580 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" event={"ID":"78c8fb66-d71a-44b7-b858-51f7ca26a407","Type":"ContainerStarted","Data":"98ad5e6220bfbd67eb6aef761204ea24e3a5e2b248bdf7884135e2d062f8e421"} Feb 18 14:51:29 crc kubenswrapper[4957]: I0218 14:51:29.130813 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" Feb 18 14:51:29 crc kubenswrapper[4957]: I0218 14:51:29.133944 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" event={"ID":"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b","Type":"ContainerStarted","Data":"94dd8748d3f7a11c97dbfaccdb5220072e7a27a32f484ff69678927cb7b0efad"} Feb 18 14:51:29 crc kubenswrapper[4957]: I0218 14:51:29.134382 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 14:51:29 crc kubenswrapper[4957]: I0218 14:51:29.154095 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podStartSLOduration=3.555909549 podStartE2EDuration="45.154077129s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.063043075 +0000 UTC m=+1153.583907819" lastFinishedPulling="2026-02-18 14:51:28.661210655 +0000 UTC m=+1195.182075399" observedRunningTime="2026-02-18 14:51:29.146457037 +0000 UTC m=+1195.667321781" watchObservedRunningTime="2026-02-18 14:51:29.154077129 +0000 UTC m=+1195.674941873" Feb 18 14:51:29 crc kubenswrapper[4957]: I0218 14:51:29.169766 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" podStartSLOduration=7.683425339 podStartE2EDuration="44.169732744s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.17766146 +0000 UTC m=+1158.698526204" lastFinishedPulling="2026-02-18 14:51:28.663968855 +0000 UTC m=+1195.184833609" observedRunningTime="2026-02-18 14:51:29.166796539 +0000 UTC m=+1195.687661293" watchObservedRunningTime="2026-02-18 14:51:29.169732744 +0000 UTC m=+1195.690597488" Feb 18 14:51:31 crc kubenswrapper[4957]: I0218 14:51:31.107921 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 14:51:31 crc kubenswrapper[4957]: I0218 14:51:31.178590 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" event={"ID":"f507ee0e-6836-4f30-b79e-63979d76a449","Type":"ContainerStarted","Data":"c57fb1d63bd24ab77c2265fcc4abd95ad9b30a2b4f8f1e289dfc3c4f662a2c38"} Feb 18 14:51:31 crc kubenswrapper[4957]: I0218 14:51:31.179050 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 14:51:31 crc kubenswrapper[4957]: I0218 14:51:31.212196 4957 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podStartSLOduration=8.115447664 podStartE2EDuration="46.212175693s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.185839918 +0000 UTC m=+1158.706704672" lastFinishedPulling="2026-02-18 14:51:30.282567957 +0000 UTC m=+1196.803432701" observedRunningTime="2026-02-18 14:51:31.203943693 +0000 UTC m=+1197.724808457" watchObservedRunningTime="2026-02-18 14:51:31.212175693 +0000 UTC m=+1197.733040437" Feb 18 14:51:31 crc kubenswrapper[4957]: I0218 14:51:31.419918 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 14:51:32 crc kubenswrapper[4957]: I0218 14:51:32.193204 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" event={"ID":"644451ba-ce73-4312-b6cd-af99eb6c9fbc","Type":"ContainerStarted","Data":"24e1387807fa85236773e4e4bc9fa2e09df0e198efe5be275265d1a261fc2df9"} Feb 18 14:51:32 crc kubenswrapper[4957]: I0218 14:51:32.194168 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 14:51:32 crc kubenswrapper[4957]: I0218 14:51:32.235997 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podStartSLOduration=7.745286879 podStartE2EDuration="47.235961647s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.162032806 +0000 UTC m=+1158.682897550" lastFinishedPulling="2026-02-18 14:51:31.652707534 +0000 UTC m=+1198.173572318" observedRunningTime="2026-02-18 14:51:32.222187246 +0000 UTC m=+1198.743051990" watchObservedRunningTime="2026-02-18 14:51:32.235961647 +0000 UTC m=+1198.756826391" Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.204244 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" event={"ID":"ef6b6faf-f852-4948-8d1b-d53eace855a4","Type":"ContainerStarted","Data":"f4537ea6a2e7c2b56620884a941925810c36ac2a1fd051c6f12253b81587426e"} Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.204765 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.206195 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" event={"ID":"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe","Type":"ContainerStarted","Data":"cabb58affd24b77604dc16dd5acc1e8c116d4b9953701d961a7a64b31b2e4c32"} Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.206354 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.207626 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" event={"ID":"33e776b3-c81e-4655-82a8-88c63ff8adf7","Type":"ContainerStarted","Data":"3dcdc21909fe479edf48311f9ce2eef3b7e38d5d08ab62b69edbe6218561c6a3"} Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.228950 4957 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podStartSLOduration=8.785886267 podStartE2EDuration="49.228917033s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.175028563 +0000 UTC m=+1158.695893307" lastFinishedPulling="2026-02-18 14:51:32.618059329 +0000 UTC m=+1199.138924073" observedRunningTime="2026-02-18 14:51:33.221549899 +0000 UTC m=+1199.742414633" watchObservedRunningTime="2026-02-18 14:51:33.228917033 +0000 UTC m=+1199.749781777" Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.253856 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" podStartSLOduration=7.677582168 podStartE2EDuration="48.253815657s" podCreationTimestamp="2026-02-18 14:50:45 +0000 UTC" firstStartedPulling="2026-02-18 14:50:52.193542952 +0000 UTC m=+1158.714407696" lastFinishedPulling="2026-02-18 14:51:32.769776441 +0000 UTC m=+1199.290641185" observedRunningTime="2026-02-18 14:51:33.244985501 +0000 UTC m=+1199.765850255" watchObservedRunningTime="2026-02-18 14:51:33.253815657 +0000 UTC m=+1199.774680401" Feb 18 14:51:33 crc kubenswrapper[4957]: I0218 14:51:33.264601 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podStartSLOduration=4.390574422 podStartE2EDuration="49.26457336s" podCreationTimestamp="2026-02-18 14:50:44 +0000 UTC" firstStartedPulling="2026-02-18 14:50:47.809237546 +0000 UTC m=+1154.330102290" lastFinishedPulling="2026-02-18 14:51:32.683236474 +0000 UTC m=+1199.204101228" observedRunningTime="2026-02-18 14:51:33.260873753 +0000 UTC m=+1199.781738527" watchObservedRunningTime="2026-02-18 14:51:33.26457336 +0000 UTC m=+1199.785438104" Feb 18 14:51:35 crc kubenswrapper[4957]: I0218 14:51:35.589596 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" Feb 18 14:51:35 crc kubenswrapper[4957]: I0218 14:51:35.717364 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" Feb 18 14:51:35 crc kubenswrapper[4957]: I0218 14:51:35.756620 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" Feb 18 14:51:35 crc kubenswrapper[4957]: I0218 14:51:35.784136 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" Feb 18 14:51:36 crc kubenswrapper[4957]: I0218 14:51:36.165438 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 14:51:36 crc kubenswrapper[4957]: I0218 14:51:36.192010 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 14:51:36 crc kubenswrapper[4957]: I0218 14:51:36.255115 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 14:51:36 crc kubenswrapper[4957]: I0218 14:51:36.255212 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 14:51:36 crc kubenswrapper[4957]: I0218 14:51:36.268577 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 14:51:45 crc kubenswrapper[4957]: I0218 14:51:45.727902 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" Feb 18 14:51:45 crc kubenswrapper[4957]: I0218 14:51:45.859348 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.033857 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kjlcd"] Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.037552 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.041738 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.042257 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.042510 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kgrw5" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.042724 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.061467 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kjlcd"] Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.109024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-config\") pod \"dnsmasq-dns-675f4bcbfc-kjlcd\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.109282 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8qfm\" (UniqueName: \"kubernetes.io/projected/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-kube-api-access-g8qfm\") pod \"dnsmasq-dns-675f4bcbfc-kjlcd\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.140541 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gbczr"] Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.142288 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.154195 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.167924 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gbczr"] Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.210862 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.210931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-config\") pod \"dnsmasq-dns-675f4bcbfc-kjlcd\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.210969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-config\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.211036 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4b78\" (UniqueName: \"kubernetes.io/projected/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-kube-api-access-b4b78\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.211079 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8qfm\" (UniqueName: \"kubernetes.io/projected/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-kube-api-access-g8qfm\") pod \"dnsmasq-dns-675f4bcbfc-kjlcd\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.212235 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-config\") pod \"dnsmasq-dns-675f4bcbfc-kjlcd\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.234404 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8qfm\" (UniqueName: \"kubernetes.io/projected/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-kube-api-access-g8qfm\") pod \"dnsmasq-dns-675f4bcbfc-kjlcd\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.313120 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4b78\" (UniqueName: \"kubernetes.io/projected/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-kube-api-access-b4b78\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:04 crc 
kubenswrapper[4957]: I0218 14:52:04.313657 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr"
Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.313772 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-config\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr"
Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.314522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr"
Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.314959 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-config\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr"
Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.332243 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4b78\" (UniqueName: \"kubernetes.io/projected/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-kube-api-access-b4b78\") pod \"dnsmasq-dns-78dd6ddcc-gbczr\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr"
Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.366481 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd"
Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.463762 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr"
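The reconciler lines above show the kubelet's two-phase volume handling for the dnsmasq pods: VerifyControllerAttachedVolume first, then MountVolume (with MountVolume.SetUp logged on success). A sketch of what the gbczr pod's volume list likely looks like, expressed as corev1 structs; the volume names and plugin kinds (configmap, projected) come straight from the log, but the referenced ConfigMap names and the projected-token details are assumptions based on the cached objects ("dns-svc", "kube-root-ca.crt") and the usual kube-api-access defaults:

package main

import corev1 "k8s.io/api/core/v1"

var tokenExpiry = int64(3607) // standard kube-api-access default; assumed

var gbczrVolumes = []corev1.Volume{
	{Name: "config", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "dns"}, // assumed from the cached ConfigMap "dns"
	}}},
	{Name: "dns-svc", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"}, // assumed from the cached ConfigMap "dns-svc"
	}}},
	// kube-api-access-* is the auto-generated projected service-account
	// token volume (token + kube-root-ca.crt; namespace projection omitted).
	{Name: "kube-api-access-b4b78", VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
		Sources: []corev1.VolumeProjection{
			{ServiceAccountToken: &corev1.ServiceAccountTokenVolumeSource{Path: "token", ExpirationSeconds: &tokenExpiry}},
			{ConfigMap: &corev1.ConfigMapProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
				Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
			}},
		},
	}}},
}

func main() { _ = gbczrVolumes }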
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:04 crc kubenswrapper[4957]: I0218 14:52:04.952028 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kjlcd"] Feb 18 14:52:05 crc kubenswrapper[4957]: I0218 14:52:05.039284 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gbczr"] Feb 18 14:52:05 crc kubenswrapper[4957]: I0218 14:52:05.505028 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" event={"ID":"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60","Type":"ContainerStarted","Data":"df75817c9afe7e5dab1ad5bf8cd2b530fdb15ee7ca336a423a746bf5524c988e"} Feb 18 14:52:05 crc kubenswrapper[4957]: I0218 14:52:05.509133 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" event={"ID":"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc","Type":"ContainerStarted","Data":"170a355681c299ea74c7afa6f2d04cfc550b540fa1ca0a6e8c3b1683c2afc4f1"} Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.694647 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kjlcd"] Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.741519 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjs77"] Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.743270 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.760767 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjs77"] Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.870405 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-config\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.870512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsc9s\" (UniqueName: \"kubernetes.io/projected/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-kube-api-access-dsc9s\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.870597 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.974191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.974257 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-config\") pod 
\"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.974329 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsc9s\" (UniqueName: \"kubernetes.io/projected/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-kube-api-access-dsc9s\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.975627 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-config\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:06 crc kubenswrapper[4957]: I0218 14:52:06.975785 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.021248 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsc9s\" (UniqueName: \"kubernetes.io/projected/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-kube-api-access-dsc9s\") pod \"dnsmasq-dns-666b6646f7-hjs77\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.070794 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.254843 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gbczr"] Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.317023 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pnzzz"] Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.318455 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.343386 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pnzzz"] Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.490843 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.492256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-config\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.492358 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72t2h\" (UniqueName: \"kubernetes.io/projected/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-kube-api-access-72t2h\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.595333 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-config\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.595408 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72t2h\" (UniqueName: \"kubernetes.io/projected/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-kube-api-access-72t2h\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.597357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-config\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.606178 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.607135 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.631662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72t2h\" (UniqueName: 
\"kubernetes.io/projected/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-kube-api-access-72t2h\") pod \"dnsmasq-dns-57d769cc4f-pnzzz\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.663966 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.829451 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjs77"] Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.953289 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.956667 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.962574 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pr78g" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.962665 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.962681 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.962933 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.963031 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.980511 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 14:52:07 crc kubenswrapper[4957]: I0218 14:52:07.983863 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.020389 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.075027 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.111234 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.112496 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.118596 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.126739 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.127258 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.127317 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.127595 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbjz\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-kube-api-access-xkbjz\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.127706 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0e8ec2b-400b-4454-acdd-517a1727e9f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.127866 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.128029 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.128073 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.128163 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " 
pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.128228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0e8ec2b-400b-4454-acdd-517a1727e9f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.128343 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.170608 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.233044 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.234665 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-server-conf\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.234734 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbjz\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-kube-api-access-xkbjz\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.234760 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.234795 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0e8ec2b-400b-4454-acdd-517a1727e9f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.234967 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggwp\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-kube-api-access-gggwp\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235075 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f237ab9c-fc69-491b-98da-97ce92214eb0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 
14:52:08.235123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235199 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235286 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235341 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-config-data\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235447 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235541 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235680 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235752 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f237ab9c-fc69-491b-98da-97ce92214eb0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235801 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235834 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235891 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235918 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a0cd7a-3f64-4555-96e9-ad69c2518568-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235946 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0e8ec2b-400b-4454-acdd-517a1727e9f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.235986 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236031 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a0cd7a-3f64-4555-96e9-ad69c2518568-pod-info\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236064 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236113 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-config-data\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236131 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236160 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236237 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236258 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236282 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236301 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236341 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236851 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.236301 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.237624 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.239078 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc 
kubenswrapper[4957]: I0218 14:52:08.240724 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.243289 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.244377 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.245941 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6fz\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-kube-api-access-ln6fz\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.247783 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.247843 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bc18ac0c40d0aacb89d93d2c4c0188f67a28fb5cdb73a182e239f99271fd422f/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.251657 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.259759 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.260223 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0e8ec2b-400b-4454-acdd-517a1727e9f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.264778 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbjz\" 
(UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-kube-api-access-xkbjz\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.271572 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0e8ec2b-400b-4454-acdd-517a1727e9f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.339048 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.352927 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-config-data\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353006 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353058 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353079 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353114 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353149 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353210 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc 
kubenswrapper[4957]: I0218 14:52:08.353239 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6fz\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-kube-api-access-ln6fz\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353290 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-server-conf\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353396 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gggwp\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-kube-api-access-gggwp\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353469 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f237ab9c-fc69-491b-98da-97ce92214eb0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353503 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353568 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353604 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-config-data\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353683 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f237ab9c-fc69-491b-98da-97ce92214eb0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353831 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353881 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a0cd7a-3f64-4555-96e9-ad69c2518568-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353915 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.353948 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.354010 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a0cd7a-3f64-4555-96e9-ad69c2518568-pod-info\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.358600 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.359898 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-config-data\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.360247 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.360748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: 
I0218 14:52:08.361227 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.363073 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.363742 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-server-conf\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.366327 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.375516 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a0cd7a-3f64-4555-96e9-ad69c2518568-pod-info\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.380340 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.380873 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f237ab9c-fc69-491b-98da-97ce92214eb0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.382615 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-config-data\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.394999 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.395070 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99592bf2358da8fc86c08666f7bb1935d4f3939ad446ee641006a3e81849f401/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.395710 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.395726 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.396156 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.396589 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6fz\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-kube-api-access-ln6fz\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.396646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a0cd7a-3f64-4555-96e9-ad69c2518568-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.399298 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gggwp\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-kube-api-access-gggwp\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.403730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f237ab9c-fc69-491b-98da-97ce92214eb0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.411554 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.411607 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/351bbc9e70a65695cf515d8d5ce5c1884e5f13a79c1a8fc47be9bcdf43886012/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.426141 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.432437 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pnzzz"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.489969 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.492819 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.507242 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.509492 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n2q7z" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.509659 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.510015 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.510091 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.510537 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.510787 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.510975 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.519308 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.537750 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: 
\"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.561837 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.561938 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12d531cb-f58f-44a6-a638-29ebb85fdbb3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.561974 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562001 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562026 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12d531cb-f58f-44a6-a638-29ebb85fdbb3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562100 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjsw\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-kube-api-access-rjjsw\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562141 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562177 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.562208 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.599479 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" event={"ID":"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433","Type":"ContainerStarted","Data":"2241434f3a2851759f88b8d4f4e85190c3380b1e8275f8b92675c8820530d41c"} Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.604592 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" event={"ID":"031ed1ca-79fe-4aa7-ade3-f5d612b2c628","Type":"ContainerStarted","Data":"7b34271156fae5bb38f086ddcc3e353e2288da8b7849ae00cc7caddc8b2ec047"} Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.609575 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664442 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664511 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664552 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664580 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12d531cb-f58f-44a6-a638-29ebb85fdbb3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664637 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664661 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjsw\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-kube-api-access-rjjsw\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664693 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664733 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664774 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 
crc kubenswrapper[4957]: I0218 14:52:08.664833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.664883 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12d531cb-f58f-44a6-a638-29ebb85fdbb3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.667007 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.668841 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.669116 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.669576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.671253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.673224 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12d531cb-f58f-44a6-a638-29ebb85fdbb3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.673944 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.674313 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.677387 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.677445 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8c9b356ebc43732f1fe9abea7b6708f9500d803b22e1280123110210f298fda6/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.680693 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12d531cb-f58f-44a6-a638-29ebb85fdbb3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.688731 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjjsw\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-kube-api-access-rjjsw\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.744547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.772967 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.787072 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:52:08 crc kubenswrapper[4957]: I0218 14:52:08.840918 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.252864 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.469219 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.473142 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.478038 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.479342 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-svdx2"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.482931 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.483599 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.489723 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.509877 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.515488 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520381 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520466 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520499 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520655 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db8ms\" (UniqueName: \"kubernetes.io/projected/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-kube-api-access-db8ms\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520789 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520841 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.520909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634054 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634140 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db8ms\" (UniqueName: \"kubernetes.io/projected/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-kube-api-access-db8ms\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634224 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634260 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634305 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634355 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634378 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.634398 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.635640 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.657820 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.659878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.660917 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.670088 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
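[editor's note] The csi_attacher.go:380 entry above shows kubelet skipping the device-staging step because the kubevirt.io.hostpath-provisioner driver does not advertise the STAGE_UNSTAGE_VOLUME node capability. A minimal sketch of that capability probe, using the public CSI spec Go bindings; the gRPC connection and context are placeholders the caller would supply:

package main

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// needsNodeStage asks a CSI driver whether it implements
// NodeStageVolume/NodeUnstageVolume. When the capability is absent, a
// kubelet-style attacher can skip MountDevice and go straight to the
// per-pod NodePublishVolume (SetUp) step, which is exactly what the
// "STAGE_UNSTAGE_VOLUME capability not set" line records.
func needsNodeStage(ctx context.Context, conn *grpc.ClientConn) (bool, error) {
	client := csi.NewNodeClient(conn)
	resp, err := client.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if rpc := c.GetRpc(); rpc != nil &&
			rpc.GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil
		}
	}
	return false, nil
}

Because the probe returns false here, the very next entry jumps directly to MountVolume.MountDevice reporting success with the global mount path, without any staging RPC having been issued.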
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.670140 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ec45dbde105a74b68ba30c7d6f9d569989991f6c2510e7a3aaf4f6f1be85ebe/globalmount\"" pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.685014 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.685471 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.699074 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db8ms\" (UniqueName: \"kubernetes.io/projected/8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070-kube-api-access-db8ms\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.793201 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0e8ec2b-400b-4454-acdd-517a1727e9f8","Type":"ContainerStarted","Data":"d67250fb9db8781552259b7dbac5385acbed98df4080642ade0a4a394cd97556"}
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.811257 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5b4321e9-312c-428b-a8a1-1d986a2d7a8d\") pod \"openstack-galera-0\" (UID: \"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070\") " pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.832766 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.849564 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 14:52:09 crc kubenswrapper[4957]: I0218 14:52:09.878060 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 18 14:52:09 crc kubenswrapper[4957]: W0218 14:52:09.890440 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12d531cb_f58f_44a6_a638_29ebb85fdbb3.slice/crio-78f2929aebeac7505d82fdbf59f327e266f392924bcb88cf2df6c8d626d61787 WatchSource:0}: Error finding container 78f2929aebeac7505d82fdbf59f327e266f392924bcb88cf2df6c8d626d61787: Status 404 returned error can't find the container with id 78f2929aebeac7505d82fdbf59f327e266f392924bcb88cf2df6c8d626d61787
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.805688 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12d531cb-f58f-44a6-a638-29ebb85fdbb3","Type":"ContainerStarted","Data":"78f2929aebeac7505d82fdbf59f327e266f392924bcb88cf2df6c8d626d61787"}
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.807755 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"81a0cd7a-3f64-4555-96e9-ad69c2518568","Type":"ContainerStarted","Data":"4e9712164a7fce0a4a1172bd337e33d4d6261de86c47fa3fab801240089b1046"}
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.809910 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f237ab9c-fc69-491b-98da-97ce92214eb0","Type":"ContainerStarted","Data":"13dcdb4960568a4dd191f12e79245344602d80cb9347623ce8312359b5d97738"}
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.828291 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 14:52:10 crc kubenswrapper[4957]: W0218 14:52:10.885893 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e98adae_ca1c_4f1d_a7e4_6ea6ccea7070.slice/crio-f6e0900afdd73b5e7147b0f6f9c30406ad69b83253c8febf0922ea027a2055d5 WatchSource:0}: Error finding container f6e0900afdd73b5e7147b0f6f9c30406ad69b83253c8febf0922ea027a2055d5: Status 404 returned error can't find the container with id f6e0900afdd73b5e7147b0f6f9c30406ad69b83253c8febf0922ea027a2055d5
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.961176 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.963906 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.966502 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rd78f"
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.972885 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.973343 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.973752 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 18 14:52:10 crc kubenswrapper[4957]: I0218 14:52:10.997840 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.024027 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.025513 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.031530 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-x6xx7"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.032159 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.036786 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.055345 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac146603-56e6-49dc-afe3-d46b005945a3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097747 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac146603-56e6-49dc-afe3-d46b005945a3-config-data\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097778 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097799 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097813 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac146603-56e6-49dc-afe3-d46b005945a3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097855 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np6bp\" (UniqueName: \"kubernetes.io/projected/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-kube-api-access-np6bp\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097888 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097931 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac146603-56e6-49dc-afe3-d46b005945a3-kolla-config\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097951 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.097990 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.098005 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.098053 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94tjx\" (UniqueName: \"kubernetes.io/projected/ac146603-56e6-49dc-afe3-d46b005945a3-kube-api-access-94tjx\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.098073 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200086 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200246 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94tjx\" (UniqueName: \"kubernetes.io/projected/ac146603-56e6-49dc-afe3-d46b005945a3-kube-api-access-94tjx\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200276 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200317 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac146603-56e6-49dc-afe3-d46b005945a3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200344 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac146603-56e6-49dc-afe3-d46b005945a3-config-data\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200382 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200408 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200445 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac146603-56e6-49dc-afe3-d46b005945a3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200500 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np6bp\" (UniqueName: \"kubernetes.io/projected/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-kube-api-access-np6bp\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200535 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200595 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac146603-56e6-49dc-afe3-d46b005945a3-kolla-config\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200625 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.200858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.201391 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac146603-56e6-49dc-afe3-d46b005945a3-config-data\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.202432 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.202876 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.209117 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac146603-56e6-49dc-afe3-d46b005945a3-kolla-config\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.210151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.217752 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.220614 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.220658 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f3dd333c54ad2fb6a21d0980eb367da674da792fa749ea0ecc5fb36742415ddf/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.222536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.224343 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac146603-56e6-49dc-afe3-d46b005945a3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.226043 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac146603-56e6-49dc-afe3-d46b005945a3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.227610 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np6bp\" (UniqueName: \"kubernetes.io/projected/a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c-kube-api-access-np6bp\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.235499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94tjx\" (UniqueName: \"kubernetes.io/projected/ac146603-56e6-49dc-afe3-d46b005945a3-kube-api-access-94tjx\") pod \"memcached-0\" (UID: \"ac146603-56e6-49dc-afe3-d46b005945a3\") " pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.317733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff557c76-adb1-426f-9b27-e4a4195591c8\") pod \"openstack-cell1-galera-0\" (UID: \"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.365174 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.597385 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 14:52:11 crc kubenswrapper[4957]: I0218 14:52:11.867230 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070","Type":"ContainerStarted","Data":"f6e0900afdd73b5e7147b0f6f9c30406ad69b83253c8febf0922ea027a2055d5"}
Feb 18 14:52:12 crc kubenswrapper[4957]: I0218 14:52:12.113748 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 18 14:52:12 crc kubenswrapper[4957]: W0218 14:52:12.121028 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac146603_56e6_49dc_afe3_d46b005945a3.slice/crio-4360a393e32988600186cdc24c2bbc592a2d2ebbca95212f52813b799de308d2 WatchSource:0}: Error finding container 4360a393e32988600186cdc24c2bbc592a2d2ebbca95212f52813b799de308d2: Status 404 returned error can't find the container with id 4360a393e32988600186cdc24c2bbc592a2d2ebbca95212f52813b799de308d2
Feb 18 14:52:12 crc kubenswrapper[4957]: I0218 14:52:12.424492 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 14:52:12 crc kubenswrapper[4957]: I0218 14:52:12.895989 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c","Type":"ContainerStarted","Data":"bbf2e69ec76f147232d773b652dd734cbd0f3ab9e07ff0f6f991706c7a425513"}
Feb 18 14:52:12 crc kubenswrapper[4957]: I0218 14:52:12.929569 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ac146603-56e6-49dc-afe3-d46b005945a3","Type":"ContainerStarted","Data":"4360a393e32988600186cdc24c2bbc592a2d2ebbca95212f52813b799de308d2"}
Feb 18 14:52:13 crc kubenswrapper[4957]: I0218 14:52:13.901323 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
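[editor's note] The "SyncLoop (PLEG): event for pod ... ContainerStarted" entries above come from the pod lifecycle event generator, which periodically relists container state from the runtime and diffs it against its previous snapshot. A stripped-down sketch of that diff; the types are hypothetical simplifications of what lives in kubelet's pkg/kubelet/pleg:

package main

import "fmt"

// relist compares the container IDs seen now against the previous
// snapshot and emits one ContainerStarted event per new ID, which the
// sync loop then logs as "SyncLoop (PLEG): event for pod".
func relist(old, now map[string][]string) []string {
	var events []string
	for pod, containers := range now {
		seen := map[string]bool{}
		for _, id := range old[pod] {
			seen[id] = true
		}
		for _, id := range containers {
			if !seen[id] {
				events = append(events,
					fmt.Sprintf("pod=%q ContainerStarted data=%q", pod, id))
			}
		}
	}
	return events
}

func main() {
	old := map[string][]string{"openstack/rabbitmq-server-0": nil}
	now := map[string][]string{"openstack/rabbitmq-server-0": {"d67250fb9db8"}}
	for _, e := range relist(old, now) {
		fmt.Println(e)
	}
}

This also explains the manager.go:1169 warnings: the cAdvisor watch can observe a new crio-* cgroup before the runtime reports the container, so the 404 "can't find the container" races are transient and resolve on the next relist.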
Feb 18 14:52:13 crc kubenswrapper[4957]: I0218 14:52:13.903443 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 14:52:13 crc kubenswrapper[4957]: I0218 14:52:13.916181 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rcl4g"
Feb 18 14:52:13 crc kubenswrapper[4957]: I0218 14:52:13.938645 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.007591 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh6wb\" (UniqueName: \"kubernetes.io/projected/48a06b17-3799-49aa-97b4-40b55c95fa86-kube-api-access-zh6wb\") pod \"kube-state-metrics-0\" (UID: \"48a06b17-3799-49aa-97b4-40b55c95fa86\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.112575 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh6wb\" (UniqueName: \"kubernetes.io/projected/48a06b17-3799-49aa-97b4-40b55c95fa86-kube-api-access-zh6wb\") pod \"kube-state-metrics-0\" (UID: \"48a06b17-3799-49aa-97b4-40b55c95fa86\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.147807 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh6wb\" (UniqueName: \"kubernetes.io/projected/48a06b17-3799-49aa-97b4-40b55c95fa86-kube-api-access-zh6wb\") pod \"kube-state-metrics-0\" (UID: \"48a06b17-3799-49aa-97b4-40b55c95fa86\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.250014 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.791711 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"]
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.794438 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.802054 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.802234 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-258fk"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.818715 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"]
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.962546 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw297\" (UniqueName: \"kubernetes.io/projected/791dd051-aa05-448b-b1d5-26cafb4662fd-kube-api-access-lw297\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"
Feb 18 14:52:14 crc kubenswrapper[4957]: I0218 14:52:14.962710 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791dd051-aa05-448b-b1d5-26cafb4662fd-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.075325 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw297\" (UniqueName: \"kubernetes.io/projected/791dd051-aa05-448b-b1d5-26cafb4662fd-kube-api-access-lw297\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.075397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791dd051-aa05-448b-b1d5-26cafb4662fd-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"
Feb 18 14:52:15 crc kubenswrapper[4957]: E0218 14:52:15.075563 4957 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Feb 18 14:52:15 crc kubenswrapper[4957]: E0218 14:52:15.075620 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791dd051-aa05-448b-b1d5-26cafb4662fd-serving-cert podName:791dd051-aa05-448b-b1d5-26cafb4662fd nodeName:}" failed. No retries permitted until 2026-02-18 14:52:15.575601059 +0000 UTC m=+1242.096465803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/791dd051-aa05-448b-b1d5-26cafb4662fd-serving-cert") pod "observability-ui-dashboards-66cbf594b5-624nc" (UID: "791dd051-aa05-448b-b1d5-26cafb4662fd") : secret "observability-ui-dashboards" not found
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.159819 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.214084 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.214242 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.226171 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.226435 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.226549 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8sf4s"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.226658 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.231615 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw297\" (UniqueName: \"kubernetes.io/projected/791dd051-aa05-448b-b1d5-26cafb4662fd-kube-api-access-lw297\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.257397 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.257664 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.257823 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.290750 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390251 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0"
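[editor's note] The nestedpendingoperations.go:348 error above fails the serving-cert mount because the secret does not exist yet, and schedules a retry 500ms out ("durationBeforeRetry 500ms"); repeated failures roughly double the delay up to a cap, so a missing secret is re-checked quickly at first and then ever more lazily. A sketch of that backoff policy; the initial value matches the log, the cap is illustrative:

package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond // first retry, as in the log
	maxBackoff     = 2 * time.Minute        // illustrative cap, not kubelet's exact value
)

// nextBackoff doubles the previous delay, starting at 500ms and
// saturating at maxBackoff. A successful mount would reset the state,
// which is why the retry eventually succeeds once the secret appears.
func nextBackoff(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialBackoff
	}
	if next := prev * 2; next < maxBackoff {
		return next
	}
	return maxBackoff
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextBackoff(d)
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", i+1, d)
	}
}

The later "MountVolume.SetUp succeeded for volume \"serving-cert\"" entry in this same capture shows the retry path completing once the secret was created.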
pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390485 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390533 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390644 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390730 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390852 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.390941 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.391233 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.391274 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.394432 4957 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-84fccf866c-pq2q4"] Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.404687 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.493910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.493972 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjm5b\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-kube-api-access-tjm5b\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494012 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494033 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-oauth-serving-cert\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494086 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494113 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494140 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95e72012-f049-432b-8490-b92c0e5724b8-console-serving-cert\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " 
pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494158 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494176 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-service-ca\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494218 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494255 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-trusted-ca-bundle\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494274 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-console-config\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494317 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6lbq\" (UniqueName: \"kubernetes.io/projected/95e72012-f049-432b-8490-b92c0e5724b8-kube-api-access-p6lbq\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494344 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95e72012-f049-432b-8490-b92c0e5724b8-console-oauth-config\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.494386 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.499020 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.505186 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.506843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.509646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.509907 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.523179 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.525041 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.545334 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.569075 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84fccf866c-pq2q4"] Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.573074 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tjm5b\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-kube-api-access-tjm5b\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.611972 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.612020 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18b2dcaab171f80c014d746e99657ace56755fa862ee3ab13f5fab6859dcbdba/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614163 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6lbq\" (UniqueName: \"kubernetes.io/projected/95e72012-f049-432b-8490-b92c0e5724b8-kube-api-access-p6lbq\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614209 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95e72012-f049-432b-8490-b92c0e5724b8-console-oauth-config\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-oauth-serving-cert\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614315 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95e72012-f049-432b-8490-b92c0e5724b8-console-serving-cert\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614334 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-service-ca\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791dd051-aa05-448b-b1d5-26cafb4662fd-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 
14:52:15.614449 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-trusted-ca-bundle\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.614475 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-console-config\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.615261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-console-config\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.648449 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95e72012-f049-432b-8490-b92c0e5724b8-console-serving-cert\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.662321 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6lbq\" (UniqueName: \"kubernetes.io/projected/95e72012-f049-432b-8490-b92c0e5724b8-kube-api-access-p6lbq\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.672836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95e72012-f049-432b-8490-b92c0e5724b8-console-oauth-config\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.674513 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-trusted-ca-bundle\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.674859 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-service-ca\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.675267 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95e72012-f049-432b-8490-b92c0e5724b8-oauth-serving-cert\") pod \"console-84fccf866c-pq2q4\" (UID: \"95e72012-f049-432b-8490-b92c0e5724b8\") " pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.712249 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791dd051-aa05-448b-b1d5-26cafb4662fd-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-624nc\" (UID: \"791dd051-aa05-448b-b1d5-26cafb4662fd\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.736383 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.768034 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.793621 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.889925 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:15 crc kubenswrapper[4957]: I0218 14:52:15.931269 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:52:16 crc kubenswrapper[4957]: I0218 14:52:16.120892 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48a06b17-3799-49aa-97b4-40b55c95fa86","Type":"ContainerStarted","Data":"195d2c4e98d270c5824eb48385c33114b6369a9191f87d913ecb78765d87024f"} Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.114387 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-624nc"] Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.144849 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.216834 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.218713 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.226923 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.227301 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wxrrg"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.227448 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.227566 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.227684 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.227720 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.288962 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289016 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289089 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289134 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289154 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289252 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289278 4957 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.289320 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297r2\" (UniqueName: \"kubernetes.io/projected/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-kube-api-access-297r2\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.380372 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84fccf866c-pq2q4"] Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391528 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391580 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297r2\" (UniqueName: \"kubernetes.io/projected/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-kube-api-access-297r2\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391607 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391629 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.391672 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.394164 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.394482 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.395193 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.399199 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.399243 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/801ced7ffdfc3d9afd4895fe54ed7c5f0da46a9a4b857e088120a05c5ab39809/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.408584 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.414240 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.419154 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.421604 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297r2\" (UniqueName: \"kubernetes.io/projected/7be80ac2-8e92-4cb0-8184-c35add0ccc9b-kube-api-access-297r2\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.517771 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e5c89b-63b2-428c-954b-ad0582a6b6f3\") pod \"ovsdbserver-nb-0\" (UID: \"7be80ac2-8e92-4cb0-8184-c35add0ccc9b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: W0218 14:52:17.541900 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e72012_f049_432b_8490_b92c0e5724b8.slice/crio-bc058ca4e5ab1f13e45fa7a33806b36b854905eb730406ec1f311890391755be WatchSource:0}: Error finding container bc058ca4e5ab1f13e45fa7a33806b36b854905eb730406ec1f311890391755be: Status 404 returned error can't find the container with id bc058ca4e5ab1f13e45fa7a33806b36b854905eb730406ec1f311890391755be Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.562705 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.674541 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5s4rw"] Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.681284 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.685880 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2gxg7" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.686475 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.686602 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.699325 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bgjwb"] Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.705720 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.737287 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5s4rw"]
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.774816 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bgjwb"]
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.804446 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0f60cd-99b5-453c-9353-5c6298f95d2b-scripts\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.804804 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwnb\" (UniqueName: \"kubernetes.io/projected/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-kube-api-access-wbwnb\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.804933 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-run\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805029 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-lib\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805146 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-run\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805270 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-scripts\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805376 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-ovn-controller-tls-certs\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805486 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-etc-ovs\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218
14:52:17.805603 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-combined-ca-bundle\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805720 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-log-ovn\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805820 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8h2m\" (UniqueName: \"kubernetes.io/projected/2c0f60cd-99b5-453c-9353-5c6298f95d2b-kube-api-access-r8h2m\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.805966 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-run-ovn\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.806066 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-log\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908149 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0f60cd-99b5-453c-9353-5c6298f95d2b-scripts\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908242 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbwnb\" (UniqueName: \"kubernetes.io/projected/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-kube-api-access-wbwnb\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908616 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-run\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908641 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-lib\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb"
Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908790 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"var-run\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-run\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908929 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-scripts\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908960 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-ovn-controller-tls-certs\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.908982 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-etc-ovs\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.909181 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-combined-ca-bundle\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.909219 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-log-ovn\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.909243 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8h2m\" (UniqueName: \"kubernetes.io/projected/2c0f60cd-99b5-453c-9353-5c6298f95d2b-kube-api-access-r8h2m\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.909293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-run-ovn\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.909327 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-log\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.909940 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-log\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " 
pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.910832 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-lib\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.910878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-log-ovn\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.911306 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-var-run\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.911504 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-run\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.911796 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c0f60cd-99b5-453c-9353-5c6298f95d2b-etc-ovs\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.911919 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-var-run-ovn\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.912619 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0f60cd-99b5-453c-9353-5c6298f95d2b-scripts\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.914022 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-scripts\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.918236 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-ovn-controller-tls-certs\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.922519 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-combined-ca-bundle\") pod 
\"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.928597 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbwnb\" (UniqueName: \"kubernetes.io/projected/e2b4f5fe-0b27-47d8-8158-b51ad4229e86-kube-api-access-wbwnb\") pod \"ovn-controller-5s4rw\" (UID: \"e2b4f5fe-0b27-47d8-8158-b51ad4229e86\") " pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:17 crc kubenswrapper[4957]: I0218 14:52:17.941087 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8h2m\" (UniqueName: \"kubernetes.io/projected/2c0f60cd-99b5-453c-9353-5c6298f95d2b-kube-api-access-r8h2m\") pod \"ovn-controller-ovs-bgjwb\" (UID: \"2c0f60cd-99b5-453c-9353-5c6298f95d2b\") " pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:18 crc kubenswrapper[4957]: I0218 14:52:18.021303 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5s4rw" Feb 18 14:52:18 crc kubenswrapper[4957]: I0218 14:52:18.039200 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:52:18 crc kubenswrapper[4957]: I0218 14:52:18.318484 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccf866c-pq2q4" event={"ID":"95e72012-f049-432b-8490-b92c0e5724b8","Type":"ContainerStarted","Data":"bc058ca4e5ab1f13e45fa7a33806b36b854905eb730406ec1f311890391755be"} Feb 18 14:52:18 crc kubenswrapper[4957]: I0218 14:52:18.318531 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerStarted","Data":"4d3ea28f0ffb9d113f8b314aecfee337e551f6ca3dd85545b810a06ff684769a"} Feb 18 14:52:18 crc kubenswrapper[4957]: I0218 14:52:18.359609 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" event={"ID":"791dd051-aa05-448b-b1d5-26cafb4662fd","Type":"ContainerStarted","Data":"89ea3e706d8ac71dfb1eb1e3bf86b2d324129df8515ea4201380d0d9dabd1c2d"} Feb 18 14:52:19 crc kubenswrapper[4957]: I0218 14:52:19.444262 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccf866c-pq2q4" event={"ID":"95e72012-f049-432b-8490-b92c0e5724b8","Type":"ContainerStarted","Data":"5e1bcb031d8c22adc291ab4ddeff87798dc82f26b4768e6947cd172516be96d8"} Feb 18 14:52:19 crc kubenswrapper[4957]: I0218 14:52:19.484699 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84fccf866c-pq2q4" podStartSLOduration=4.4846790930000004 podStartE2EDuration="4.484679093s" podCreationTimestamp="2026-02-18 14:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:52:19.482780788 +0000 UTC m=+1246.003645542" watchObservedRunningTime="2026-02-18 14:52:19.484679093 +0000 UTC m=+1246.005543837" Feb 18 14:52:19 crc kubenswrapper[4957]: I0218 14:52:19.537299 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5s4rw"] Feb 18 14:52:19 crc kubenswrapper[4957]: I0218 14:52:19.691863 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.471349 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ovs-bgjwb"] Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.925237 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.927371 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.929295 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.942907 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.943119 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.943263 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 14:52:20 crc kubenswrapper[4957]: I0218 14:52:20.943427 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vdm2d" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.046857 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqv8\" (UniqueName: \"kubernetes.io/projected/79f094ab-0b7e-4749-8c78-237f70a4bcef-kube-api-access-qpqv8\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047118 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f094ab-0b7e-4749-8c78-237f70a4bcef-config\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047191 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047257 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047341 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f094ab-0b7e-4749-8c78-237f70a4bcef-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047418 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.047477 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79f094ab-0b7e-4749-8c78-237f70a4bcef-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.155062 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.155873 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79f094ab-0b7e-4749-8c78-237f70a4bcef-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.155971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.156076 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqv8\" (UniqueName: \"kubernetes.io/projected/79f094ab-0b7e-4749-8c78-237f70a4bcef-kube-api-access-qpqv8\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.156189 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f094ab-0b7e-4749-8c78-237f70a4bcef-config\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.156522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79f094ab-0b7e-4749-8c78-237f70a4bcef-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.157198 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.157242 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.157385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f094ab-0b7e-4749-8c78-237f70a4bcef-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.158159 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f094ab-0b7e-4749-8c78-237f70a4bcef-config\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.158517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f094ab-0b7e-4749-8c78-237f70a4bcef-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.160141 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.160183 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ade7d168d7927eb6e61025a77db9b81fed01b841013dcd1cf3b6948fa7c3bd/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.165742 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.168483 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.175275 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f094ab-0b7e-4749-8c78-237f70a4bcef-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.176496 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqv8\" (UniqueName: \"kubernetes.io/projected/79f094ab-0b7e-4749-8c78-237f70a4bcef-kube-api-access-qpqv8\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:21 crc 
kubenswrapper[4957]: I0218 14:52:21.242650 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e676592d-3007-44f0-bac4-4316d6e51ea2\") pod \"ovsdbserver-sb-0\" (UID: \"79f094ab-0b7e-4749-8c78-237f70a4bcef\") " pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:21 crc kubenswrapper[4957]: I0218 14:52:21.279994 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 18 14:52:23 crc kubenswrapper[4957]: W0218 14:52:23.706093 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be80ac2_8e92_4cb0_8184_c35add0ccc9b.slice/crio-59e8b6f954448566e7ec7a3f8f1c2335d3b773e68fe3c0fdcfe0b4347b42d233 WatchSource:0}: Error finding container 59e8b6f954448566e7ec7a3f8f1c2335d3b773e68fe3c0fdcfe0b4347b42d233: Status 404 returned error can't find the container with id 59e8b6f954448566e7ec7a3f8f1c2335d3b773e68fe3c0fdcfe0b4347b42d233
Feb 18 14:52:24 crc kubenswrapper[4957]: I0218 14:52:24.528763 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bgjwb" event={"ID":"2c0f60cd-99b5-453c-9353-5c6298f95d2b","Type":"ContainerStarted","Data":"aff0673fd2130aca74c088c312750e145f1a12c984f0e7cde7d414888446dc1c"}
Feb 18 14:52:24 crc kubenswrapper[4957]: I0218 14:52:24.530403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5s4rw" event={"ID":"e2b4f5fe-0b27-47d8-8158-b51ad4229e86","Type":"ContainerStarted","Data":"2f50b6d364057679503ded08c8cf885848ebc116d3c1340231c2d9fb4f030ca5"}
Feb 18 14:52:24 crc kubenswrapper[4957]: I0218 14:52:24.532745 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7be80ac2-8e92-4cb0-8184-c35add0ccc9b","Type":"ContainerStarted","Data":"59e8b6f954448566e7ec7a3f8f1c2335d3b773e68fe3c0fdcfe0b4347b42d233"}
Feb 18 14:52:25 crc kubenswrapper[4957]: I0218 14:52:25.768794 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84fccf866c-pq2q4"
Feb 18 14:52:25 crc kubenswrapper[4957]: I0218 14:52:25.769011 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84fccf866c-pq2q4"
Feb 18 14:52:25 crc kubenswrapper[4957]: I0218 14:52:25.773548 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84fccf866c-pq2q4"
Feb 18 14:52:26 crc kubenswrapper[4957]: I0218 14:52:26.557473 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84fccf866c-pq2q4"
Feb 18 14:52:26 crc kubenswrapper[4957]: I0218 14:52:26.607947 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f4db7666f-9jmvl"]
Feb 18 14:52:37 crc kubenswrapper[4957]: I0218 14:52:37.278852 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:52:37 crc kubenswrapper[4957]: I0218 14:52:37.279641 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon"
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:52:37 crc kubenswrapper[4957]: E0218 14:52:37.865219 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:52:37 crc kubenswrapper[4957]: E0218 14:52:37.865397 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkbjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(a0e8ec2b-400b-4454-acdd-517a1727e9f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:37 crc kubenswrapper[4957]: E0218 14:52:37.866648 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8"
Feb 18 14:52:38 crc kubenswrapper[4957]: E0218 14:52:38.218063 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f"
Feb 18 14:52:38 crc kubenswrapper[4957]: E0218 14:52:38.218337 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw297,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-624nc_openshift-operators(791dd051-aa05-448b-b1d5-26cafb4662fd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 18 14:52:38 crc kubenswrapper[4957]: E0218 14:52:38.220619 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" podUID="791dd051-aa05-448b-b1d5-26cafb4662fd"
Feb 18 14:52:38 crc kubenswrapper[4957]: E0218 14:52:38.694843 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" podUID="791dd051-aa05-448b-b1d5-26cafb4662fd"
Feb 18 14:52:38 crc kubenswrapper[4957]: E0218 14:52:38.695342 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.478224 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.478718 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gggwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(f237ab9c-fc69-491b-98da-97ce92214eb0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.479868 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.485899 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.485867 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.486081 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db8ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.486198 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins 
/operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln6fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(81a0cd7a-3f64-4555-96e9-ad69c2518568): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.487467 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.487546 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.525732 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.525955 4957 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-np6bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.527939 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.533114 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.533267 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> 
/var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjjsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(12d531cb-f58f-44a6-a638-29ebb85fdbb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.534869 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.713732 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.713895 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.713992 4957 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.714378 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" Feb 18 14:52:40 crc kubenswrapper[4957]: E0218 14:52:40.716149 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.179614 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3920989452/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.179809 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56h665h665h699h665h558h64hd9h79h67fhd9h566hddh685h66bh677hb9h5ddhfchb6hc9hbch5dbh7h5cfh6fhdbhb9h688h9bh5fch94q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-297r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exe
c:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(7be80ac2-8e92-4cb0-8184-c35add0ccc9b): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3920989452/1\": happened during read: context canceled" logger="UnhandledError" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.206874 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.207084 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n56fh557hbch5bdh5fdhcch65dh66dh5c8h5cdh8fhb4h5c5hch57bh5dhcfhb7hc5h6h564h565h5ddh599hdbh5bfh9h566h577h59h86h548q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94tjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ac146603-56e6-49dc-afe3-d46b005945a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.208592 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ac146603-56e6-49dc-afe3-d46b005945a3" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.725839 4957 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="ac146603-56e6-49dc-afe3-d46b005945a3" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.947672 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:7a0458b3a462aa0da991f651a47f43e24a878ab9d5220ab120bf240881729201: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:7a0458b3a462aa0da991f651a47f43e24a878ab9d5220ab120bf240881729201\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.948032 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h698h74hb5h64dh55fhbhb6h57h54dh589h5d7hf9h7ch647h5fdh77h689h55bh689h5f6h6dhf5h557h5b8h5c5h59bh66bh68dhc6h5b7h5bcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbwnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessT
hreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-5s4rw_openstack(e2b4f5fe-0b27-47d8-8158-b51ad4229e86): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:7a0458b3a462aa0da991f651a47f43e24a878ab9d5220ab120bf240881729201: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:7a0458b3a462aa0da991f651a47f43e24a878ab9d5220ab120bf240881729201\": context canceled" logger="UnhandledError" Feb 18 14:52:41 crc kubenswrapper[4957]: E0218 14:52:41.949392 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:7a0458b3a462aa0da991f651a47f43e24a878ab9d5220ab120bf240881729201: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:7a0458b3a462aa0da991f651a47f43e24a878ab9d5220ab120bf240881729201\\\": context canceled\"" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:41.999755 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:41.999958 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8qfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kjlcd_openstack(957cf5bf-ce95-4a7c-ab2d-fc51db7343cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.003083 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" podUID="957cf5bf-ce95-4a7c-ab2d-fc51db7343cc" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.346799 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.347016 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 
--watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjm5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(383c2ab1-7f15-422c-a4ed-3c899e3a8c74): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.348820 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.414213 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.414564 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72t2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-pnzzz_openstack(031ed1ca-79fe-4aa7-ade3-f5d612b2c628): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.415732 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.425808 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.426171 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4b78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-gbczr_openstack(7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.427356 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" podUID="7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.496980 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.497181 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsc9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hjs77_openstack(ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.498669 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.735324 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.735853 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.736008 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" Feb 18 14:52:42 crc kubenswrapper[4957]: E0218 14:52:42.736453 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" Feb 18 14:52:44 crc kubenswrapper[4957]: E0218 14:52:44.155571 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Feb 18 14:52:44 crc kubenswrapper[4957]: E0218 14:52:44.156275 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h698h74hb5h64dh55fhbhb6h57h54dh589h5d7hf9h7ch647h5fdh77h689h55bh689h5f6h6dhf5h557h5b8h5c5h59bh66bh68dhc6h5b7h5bcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8h2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-bgjwb_openstack(2c0f60cd-99b5-453c-9353-5c6298f95d2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:52:44 crc kubenswrapper[4957]: E0218 14:52:44.157809 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.250586 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.256482 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.303348 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-dns-svc\") pod \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.303444 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4b78\" (UniqueName: \"kubernetes.io/projected/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-kube-api-access-b4b78\") pod \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.303486 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-config\") pod \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.303516 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8qfm\" (UniqueName: \"kubernetes.io/projected/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-kube-api-access-g8qfm\") pod \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\" (UID: \"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc\") " Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.303587 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-config\") pod \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\" (UID: \"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60\") " Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.304596 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-config" (OuterVolumeSpecName: "config") pod "957cf5bf-ce95-4a7c-ab2d-fc51db7343cc" (UID: "957cf5bf-ce95-4a7c-ab2d-fc51db7343cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.305539 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60" (UID: "7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.306003 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-config" (OuterVolumeSpecName: "config") pod "7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60" (UID: "7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.311835 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-kube-api-access-b4b78" (OuterVolumeSpecName: "kube-api-access-b4b78") pod "7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60" (UID: "7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60"). InnerVolumeSpecName "kube-api-access-b4b78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.327528 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-kube-api-access-g8qfm" (OuterVolumeSpecName: "kube-api-access-g8qfm") pod "957cf5bf-ce95-4a7c-ab2d-fc51db7343cc" (UID: "957cf5bf-ce95-4a7c-ab2d-fc51db7343cc"). InnerVolumeSpecName "kube-api-access-g8qfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.407719 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.407794 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.407808 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4b78\" (UniqueName: \"kubernetes.io/projected/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60-kube-api-access-b4b78\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.407822 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.407831 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8qfm\" (UniqueName: \"kubernetes.io/projected/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc-kube-api-access-g8qfm\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.754591 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" event={"ID":"7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60","Type":"ContainerDied","Data":"df75817c9afe7e5dab1ad5bf8cd2b530fdb15ee7ca336a423a746bf5524c988e"} Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.754632 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gbczr" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.755986 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" event={"ID":"957cf5bf-ce95-4a7c-ab2d-fc51db7343cc","Type":"ContainerDied","Data":"170a355681c299ea74c7afa6f2d04cfc550b540fa1ca0a6e8c3b1683c2afc4f1"} Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.756182 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kjlcd" Feb 18 14:52:44 crc kubenswrapper[4957]: E0218 14:52:44.758035 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.839769 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kjlcd"] Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.856654 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kjlcd"] Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.874601 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gbczr"] Feb 18 14:52:44 crc kubenswrapper[4957]: I0218 14:52:44.883562 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gbczr"] Feb 18 14:52:45 crc kubenswrapper[4957]: I0218 14:52:45.618964 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:52:45 crc kubenswrapper[4957]: I0218 14:52:45.769198 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"79f094ab-0b7e-4749-8c78-237f70a4bcef","Type":"ContainerStarted","Data":"106c8a22f6b6cb7c2848cba87c08469992c2ef061c028ffb1316bed77f7b8421"} Feb 18 14:52:46 crc kubenswrapper[4957]: I0218 14:52:46.244010 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60" path="/var/lib/kubelet/pods/7017d3a7-f4a1-47ad-a5ad-e4b80ac7ed60/volumes" Feb 18 14:52:46 crc kubenswrapper[4957]: I0218 14:52:46.244589 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957cf5bf-ce95-4a7c-ab2d-fc51db7343cc" path="/var/lib/kubelet/pods/957cf5bf-ce95-4a7c-ab2d-fc51db7343cc/volumes" Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.249991 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: Get \"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71\": context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.250074 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: Get \"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71\": context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.250250 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zh6wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(48a06b17-3799-49aa-97b4-40b55c95fa86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: Get \"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71\": context canceled" logger="UnhandledError" Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.252229 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: Get \\\"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71\\\": context canceled\"" pod="openstack/kube-state-metrics-0" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.462449 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3920989452/1\\\": happened during read: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="7be80ac2-8e92-4cb0-8184-c35add0ccc9b" Feb 18 14:52:46 crc kubenswrapper[4957]: I0218 14:52:46.783025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"7be80ac2-8e92-4cb0-8184-c35add0ccc9b","Type":"ContainerStarted","Data":"1b4f1b4197aad55a3ed3f6e042d2c5352fa7087abf94120c86b4c28f0ebde6ca"} Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.784217 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" Feb 18 14:52:46 crc kubenswrapper[4957]: E0218 14:52:46.785651 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7be80ac2-8e92-4cb0-8184-c35add0ccc9b" Feb 18 14:52:47 crc kubenswrapper[4957]: I0218 14:52:47.793616 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"79f094ab-0b7e-4749-8c78-237f70a4bcef","Type":"ContainerStarted","Data":"0a2040eb8b0f8780f7c1cbc7e8071e287771cb887edf3d93db0880f9edb718e9"} Feb 18 14:52:47 crc kubenswrapper[4957]: I0218 14:52:47.794022 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"79f094ab-0b7e-4749-8c78-237f70a4bcef","Type":"ContainerStarted","Data":"01bb31400adb5c9b832c5d24cf21d89681ea7d0b2fd4e9b82f1aa158eabf10c0"} Feb 18 14:52:47 crc kubenswrapper[4957]: E0218 14:52:47.796022 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7be80ac2-8e92-4cb0-8184-c35add0ccc9b" Feb 18 14:52:47 crc kubenswrapper[4957]: I0218 14:52:47.840974 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.796135667 podStartE2EDuration="28.840952393s" podCreationTimestamp="2026-02-18 14:52:19 +0000 UTC" firstStartedPulling="2026-02-18 14:52:45.673539639 +0000 UTC m=+1272.194404383" lastFinishedPulling="2026-02-18 14:52:46.718356365 +0000 UTC m=+1273.239221109" observedRunningTime="2026-02-18 14:52:47.837684118 +0000 UTC m=+1274.358548862" watchObservedRunningTime="2026-02-18 14:52:47.840952393 +0000 UTC m=+1274.361817137" Feb 18 14:52:48 crc kubenswrapper[4957]: I0218 14:52:48.280532 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:51 crc kubenswrapper[4957]: I0218 14:52:51.280351 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:51 crc kubenswrapper[4957]: I0218 14:52:51.321790 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:51 crc kubenswrapper[4957]: I0218 14:52:51.690826 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f4db7666f-9jmvl" podUID="7f1e72bc-e441-4172-b761-7d580c7a6ebc" containerName="console" containerID="cri-o://d95140379f41a68d3520bb53d130245803292a9350d3a82bd8e2187deda0e519" gracePeriod=15 Feb 18 14:52:51 crc kubenswrapper[4957]: I0218 14:52:51.836740 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f4db7666f-9jmvl_7f1e72bc-e441-4172-b761-7d580c7a6ebc/console/0.log" Feb 18 14:52:51 crc kubenswrapper[4957]: I0218 14:52:51.837200 4957 generic.go:334] "Generic (PLEG): container finished" podID="7f1e72bc-e441-4172-b761-7d580c7a6ebc" containerID="d95140379f41a68d3520bb53d130245803292a9350d3a82bd8e2187deda0e519" exitCode=2 Feb 18 14:52:51 crc kubenswrapper[4957]: I0218 14:52:51.837365 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f4db7666f-9jmvl" event={"ID":"7f1e72bc-e441-4172-b761-7d580c7a6ebc","Type":"ContainerDied","Data":"d95140379f41a68d3520bb53d130245803292a9350d3a82bd8e2187deda0e519"} Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.293206 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f4db7666f-9jmvl_7f1e72bc-e441-4172-b761-7d580c7a6ebc/console/0.log" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.293616 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.399701 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-oauth-serving-cert\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.400508 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.400629 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-oauth-config\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.401115 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmghv\" (UniqueName: \"kubernetes.io/projected/7f1e72bc-e441-4172-b761-7d580c7a6ebc-kube-api-access-tmghv\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.401151 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-serving-cert\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.401190 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-trusted-ca-bundle\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.401337 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-config\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.401456 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-service-ca\") pod \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\" (UID: \"7f1e72bc-e441-4172-b761-7d580c7a6ebc\") " Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.401725 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.402099 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-config" (OuterVolumeSpecName: "console-config") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.402159 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.402215 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.402276 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-service-ca" (OuterVolumeSpecName: "service-ca") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.403942 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.404334 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.404373 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1e72bc-e441-4172-b761-7d580c7a6ebc-kube-api-access-tmghv" (OuterVolumeSpecName: "kube-api-access-tmghv") pod "7f1e72bc-e441-4172-b761-7d580c7a6ebc" (UID: "7f1e72bc-e441-4172-b761-7d580c7a6ebc"). InnerVolumeSpecName "kube-api-access-tmghv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.504840 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.504888 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmghv\" (UniqueName: \"kubernetes.io/projected/7f1e72bc-e441-4172-b761-7d580c7a6ebc-kube-api-access-tmghv\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.504906 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.504919 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.504932 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f1e72bc-e441-4172-b761-7d580c7a6ebc-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.847685 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f4db7666f-9jmvl_7f1e72bc-e441-4172-b761-7d580c7a6ebc/console/0.log" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.848171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f4db7666f-9jmvl" event={"ID":"7f1e72bc-e441-4172-b761-7d580c7a6ebc","Type":"ContainerDied","Data":"96b567d8437bf02dcd88506f739fda65f8d04c03e11424d93324e16fa9742ed3"} Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.848210 4957 scope.go:117] "RemoveContainer" containerID="d95140379f41a68d3520bb53d130245803292a9350d3a82bd8e2187deda0e519" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.848279 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f4db7666f-9jmvl" Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.961900 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f4db7666f-9jmvl"] Feb 18 14:52:52 crc kubenswrapper[4957]: I0218 14:52:52.968808 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f4db7666f-9jmvl"] Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.857693 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c","Type":"ContainerStarted","Data":"acd2db0fbfc9d44ed15fd98764683de899393a720af2934d55755bb257365009"} Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.859336 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" event={"ID":"791dd051-aa05-448b-b1d5-26cafb4662fd","Type":"ContainerStarted","Data":"6efaa3f5ca075bdaad2d72e730801a482d03fc45a73f15ba4bb3e8baca576919"} Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.863753 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12d531cb-f58f-44a6-a638-29ebb85fdbb3","Type":"ContainerStarted","Data":"d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270"} Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.867475 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ac146603-56e6-49dc-afe3-d46b005945a3","Type":"ContainerStarted","Data":"28001150f660ec3ad66299083a1d6e5043345a7526204360d86acce16ed38566"} Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.867857 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.869753 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0e8ec2b-400b-4454-acdd-517a1727e9f8","Type":"ContainerStarted","Data":"5ae784f50deb6c10121274079b9c6592c06242581b699c710aac5ddd82f24d65"} Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.872923 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070","Type":"ContainerStarted","Data":"76e03bec851fe677318b3bdeb55a981282bdaae70ac526d9568493a00afba2cc"} Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.925820 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-624nc" podStartSLOduration=3.991560141 podStartE2EDuration="39.925798952s" podCreationTimestamp="2026-02-18 14:52:14 +0000 UTC" firstStartedPulling="2026-02-18 14:52:17.151085318 +0000 UTC m=+1243.671950062" lastFinishedPulling="2026-02-18 14:52:53.085324129 +0000 UTC m=+1279.606188873" observedRunningTime="2026-02-18 14:52:53.923184336 +0000 UTC m=+1280.444049080" watchObservedRunningTime="2026-02-18 14:52:53.925798952 +0000 UTC m=+1280.446663686" Feb 18 14:52:53 crc kubenswrapper[4957]: I0218 14:52:53.985317 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.42717624 podStartE2EDuration="43.985293953s" podCreationTimestamp="2026-02-18 14:52:10 +0000 UTC" firstStartedPulling="2026-02-18 14:52:12.12468198 +0000 UTC m=+1238.645546724" lastFinishedPulling="2026-02-18 14:52:52.682799643 +0000 UTC m=+1279.203664437" 
observedRunningTime="2026-02-18 14:52:53.9845028 +0000 UTC m=+1280.505367554" watchObservedRunningTime="2026-02-18 14:52:53.985293953 +0000 UTC m=+1280.506158697" Feb 18 14:52:54 crc kubenswrapper[4957]: I0218 14:52:54.231505 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1e72bc-e441-4172-b761-7d580c7a6ebc" path="/var/lib/kubelet/pods/7f1e72bc-e441-4172-b761-7d580c7a6ebc/volumes" Feb 18 14:52:54 crc kubenswrapper[4957]: I0218 14:52:54.886602 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"81a0cd7a-3f64-4555-96e9-ad69c2518568","Type":"ContainerStarted","Data":"a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52"} Feb 18 14:52:55 crc kubenswrapper[4957]: I0218 14:52:55.895186 4957 generic.go:334] "Generic (PLEG): container finished" podID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerID="dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180" exitCode=0 Feb 18 14:52:55 crc kubenswrapper[4957]: I0218 14:52:55.895282 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" event={"ID":"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433","Type":"ContainerDied","Data":"dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180"} Feb 18 14:52:55 crc kubenswrapper[4957]: I0218 14:52:55.897587 4957 generic.go:334] "Generic (PLEG): container finished" podID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerID="196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f" exitCode=0 Feb 18 14:52:55 crc kubenswrapper[4957]: I0218 14:52:55.897656 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" event={"ID":"031ed1ca-79fe-4aa7-ade3-f5d612b2c628","Type":"ContainerDied","Data":"196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f"} Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.322465 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.620661 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjs77"] Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.657165 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-59s68"] Feb 18 14:52:56 crc kubenswrapper[4957]: E0218 14:52:56.657699 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1e72bc-e441-4172-b761-7d580c7a6ebc" containerName="console" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.657722 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1e72bc-e441-4172-b761-7d580c7a6ebc" containerName="console" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.657996 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1e72bc-e441-4172-b761-7d580c7a6ebc" containerName="console" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.659410 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.661683 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.675561 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-59s68"] Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.722205 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ztccc"] Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.723514 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.729565 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.741724 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ztccc"] Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.801991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdp6\" (UniqueName: \"kubernetes.io/projected/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-kube-api-access-qrdp6\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.802073 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.802215 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-config\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.802277 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.904573 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93827c35-2801-441b-9a24-813c5d4a29ed-config\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905018 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdw9b\" (UniqueName: \"kubernetes.io/projected/93827c35-2801-441b-9a24-813c5d4a29ed-kube-api-access-gdw9b\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: 
I0218 14:52:56.905059 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93827c35-2801-441b-9a24-813c5d4a29ed-combined-ca-bundle\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905127 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrdp6\" (UniqueName: \"kubernetes.io/projected/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-kube-api-access-qrdp6\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905207 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905296 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/93827c35-2801-441b-9a24-813c5d4a29ed-ovs-rundir\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905366 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/93827c35-2801-441b-9a24-813c5d4a29ed-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905406 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/93827c35-2801-441b-9a24-813c5d4a29ed-ovn-rundir\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905477 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-config\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.905562 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.906538 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc 
kubenswrapper[4957]: I0218 14:52:56.907187 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-config\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.908038 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.925659 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" event={"ID":"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433","Type":"ContainerStarted","Data":"63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714"} Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.926028 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerName="dnsmasq-dns" containerID="cri-o://63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714" gracePeriod=10 Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.926417 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.930546 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f237ab9c-fc69-491b-98da-97ce92214eb0","Type":"ContainerStarted","Data":"d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007"} Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.935679 4957 generic.go:334] "Generic (PLEG): container finished" podID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerID="acd2db0fbfc9d44ed15fd98764683de899393a720af2934d55755bb257365009" exitCode=0 Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.935787 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c","Type":"ContainerDied","Data":"acd2db0fbfc9d44ed15fd98764683de899393a720af2934d55755bb257365009"} Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.936261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrdp6\" (UniqueName: \"kubernetes.io/projected/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-kube-api-access-qrdp6\") pod \"dnsmasq-dns-7f896c8c65-59s68\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.960208 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" event={"ID":"031ed1ca-79fe-4aa7-ade3-f5d612b2c628","Type":"ContainerStarted","Data":"47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1"} Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.960818 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.961559 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" podStartSLOduration=-9223371985.893229 
podStartE2EDuration="50.961546808s" podCreationTimestamp="2026-02-18 14:52:06 +0000 UTC" firstStartedPulling="2026-02-18 14:52:07.86777503 +0000 UTC m=+1234.388639774" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:52:56.954499883 +0000 UTC m=+1283.475364637" watchObservedRunningTime="2026-02-18 14:52:56.961546808 +0000 UTC m=+1283.482411552" Feb 18 14:52:56 crc kubenswrapper[4957]: I0218 14:52:56.977792 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.012592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93827c35-2801-441b-9a24-813c5d4a29ed-config\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.014667 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdw9b\" (UniqueName: \"kubernetes.io/projected/93827c35-2801-441b-9a24-813c5d4a29ed-kube-api-access-gdw9b\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.014789 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93827c35-2801-441b-9a24-813c5d4a29ed-combined-ca-bundle\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.014943 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/93827c35-2801-441b-9a24-813c5d4a29ed-ovs-rundir\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.015205 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/93827c35-2801-441b-9a24-813c5d4a29ed-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.015352 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/93827c35-2801-441b-9a24-813c5d4a29ed-ovn-rundir\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.015938 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/93827c35-2801-441b-9a24-813c5d4a29ed-ovn-rundir\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.013891 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93827c35-2801-441b-9a24-813c5d4a29ed-config\") pod \"ovn-controller-metrics-ztccc\" (UID: 
\"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.018153 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/93827c35-2801-441b-9a24-813c5d4a29ed-ovs-rundir\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.032582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/93827c35-2801-441b-9a24-813c5d4a29ed-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.033900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93827c35-2801-441b-9a24-813c5d4a29ed-combined-ca-bundle\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.049210 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdw9b\" (UniqueName: \"kubernetes.io/projected/93827c35-2801-441b-9a24-813c5d4a29ed-kube-api-access-gdw9b\") pod \"ovn-controller-metrics-ztccc\" (UID: \"93827c35-2801-441b-9a24-813c5d4a29ed\") " pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.080025 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" podStartSLOduration=3.878987233 podStartE2EDuration="50.080004403s" podCreationTimestamp="2026-02-18 14:52:07 +0000 UTC" firstStartedPulling="2026-02-18 14:52:08.461762915 +0000 UTC m=+1234.982627659" lastFinishedPulling="2026-02-18 14:52:54.662780085 +0000 UTC m=+1281.183644829" observedRunningTime="2026-02-18 14:52:57.06855858 +0000 UTC m=+1283.589423324" watchObservedRunningTime="2026-02-18 14:52:57.080004403 +0000 UTC m=+1283.600869147" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.125581 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pnzzz"] Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.167005 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wr9w"] Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.168928 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.175311 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.183876 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wr9w"] Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.330731 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.331150 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jfm\" (UniqueName: \"kubernetes.io/projected/9d31b9ed-0265-41fb-9f57-538cd96a5524-kube-api-access-v8jfm\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.331305 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.331330 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-config\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.334453 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.349648 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-ztccc" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.437397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.437441 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jfm\" (UniqueName: \"kubernetes.io/projected/9d31b9ed-0265-41fb-9f57-538cd96a5524-kube-api-access-v8jfm\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.437483 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.437498 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-config\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.437567 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.439737 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.455302 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.456801 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.465293 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-config\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 
14:52:57.538112 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jfm\" (UniqueName: \"kubernetes.io/projected/9d31b9ed-0265-41fb-9f57-538cd96a5524-kube-api-access-v8jfm\") pod \"dnsmasq-dns-86db49b7ff-5wr9w\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.570939 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.826857 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.951042 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-config\") pod \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.951106 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsc9s\" (UniqueName: \"kubernetes.io/projected/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-kube-api-access-dsc9s\") pod \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.951193 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-dns-svc\") pod \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\" (UID: \"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433\") " Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.957715 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-59s68"] Feb 18 14:52:57 crc kubenswrapper[4957]: I0218 14:52:57.970577 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-kube-api-access-dsc9s" (OuterVolumeSpecName: "kube-api-access-dsc9s") pod "ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" (UID: "ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433"). InnerVolumeSpecName "kube-api-access-dsc9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.010754 4957 generic.go:334] "Generic (PLEG): container finished" podID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerID="63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714" exitCode=0 Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.010930 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" event={"ID":"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433","Type":"ContainerDied","Data":"63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714"} Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.010977 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" event={"ID":"ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433","Type":"ContainerDied","Data":"2241434f3a2851759f88b8d4f4e85190c3380b1e8275f8b92675c8820530d41c"} Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.011005 4957 scope.go:117] "RemoveContainer" containerID="63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.011275 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjs77" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.024079 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" event={"ID":"f6d43c29-7df2-45a3-8b11-0cf3f6134f09","Type":"ContainerStarted","Data":"4313656481bd97ba3feee4643bf9520ea12ea3acca427abb27fe51e535080e51"} Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.028450 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-config" (OuterVolumeSpecName: "config") pod "ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" (UID: "ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.036911 4957 generic.go:334] "Generic (PLEG): container finished" podID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerID="76e03bec851fe677318b3bdeb55a981282bdaae70ac526d9568493a00afba2cc" exitCode=0 Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.037295 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070","Type":"ContainerDied","Data":"76e03bec851fe677318b3bdeb55a981282bdaae70ac526d9568493a00afba2cc"} Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.040591 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c","Type":"ContainerStarted","Data":"b75fc79baf9d486a4ac87956cd0d8a072675147f14f4b685af9fdcd0c2c1d609"} Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.059560 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.059599 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsc9s\" (UniqueName: \"kubernetes.io/projected/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-kube-api-access-dsc9s\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.069512 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" (UID: "ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.139792 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.003522758 podStartE2EDuration="49.139765373s" podCreationTimestamp="2026-02-18 14:52:09 +0000 UTC" firstStartedPulling="2026-02-18 14:52:12.51534234 +0000 UTC m=+1239.036207074" lastFinishedPulling="2026-02-18 14:52:52.651584935 +0000 UTC m=+1279.172449689" observedRunningTime="2026-02-18 14:52:58.110245925 +0000 UTC m=+1284.631110689" watchObservedRunningTime="2026-02-18 14:52:58.139765373 +0000 UTC m=+1284.660630117" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.166160 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.189683 4957 scope.go:117] "RemoveContainer" containerID="dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.261673 4957 scope.go:117] "RemoveContainer" containerID="63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714" Feb 18 14:52:58 crc kubenswrapper[4957]: E0218 14:52:58.276164 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714\": container with ID starting with 63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714 not found: ID does not exist" containerID="63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.276210 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714"} err="failed to get container status \"63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714\": rpc error: code = NotFound desc = could not find container \"63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714\": container with ID starting with 63b0d51e0259abe027f4c7aab8cab3ffcff8d3d6b431db40d2df655784a4e714 not found: ID does not exist" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.276238 4957 scope.go:117] "RemoveContainer" containerID="dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180" Feb 18 14:52:58 crc kubenswrapper[4957]: E0218 14:52:58.284229 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180\": container with ID starting with dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180 not found: ID does not exist" containerID="dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.284287 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180"} err="failed to get container status \"dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180\": rpc error: code = NotFound desc = could not find container \"dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180\": container with ID starting with dcc45996235dadcb9c373e97fcff0e83e1c26bbbf7dbc22da9cdb136e2b6d180 
not found: ID does not exist" Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.340649 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ztccc"] Feb 18 14:52:58 crc kubenswrapper[4957]: W0218 14:52:58.349365 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93827c35_2801_441b_9a24_813c5d4a29ed.slice/crio-41a01ae676beab5d23da5aee70f6e7c5e9227f2736a55858bf14ff9d30f762f4 WatchSource:0}: Error finding container 41a01ae676beab5d23da5aee70f6e7c5e9227f2736a55858bf14ff9d30f762f4: Status 404 returned error can't find the container with id 41a01ae676beab5d23da5aee70f6e7c5e9227f2736a55858bf14ff9d30f762f4 Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.353932 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wr9w"] Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.364386 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjs77"] Feb 18 14:52:58 crc kubenswrapper[4957]: I0218 14:52:58.373305 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjs77"] Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.060774 4957 generic.go:334] "Generic (PLEG): container finished" podID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerID="3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643" exitCode=0 Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.060870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" event={"ID":"9d31b9ed-0265-41fb-9f57-538cd96a5524","Type":"ContainerDied","Data":"3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643"} Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.061526 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" event={"ID":"9d31b9ed-0265-41fb-9f57-538cd96a5524","Type":"ContainerStarted","Data":"35f4d88346b30da7d1ce646e8b0219c1bfac2d1e3721c6129dd7ab540434123e"} Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.066172 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070","Type":"ContainerStarted","Data":"c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3"} Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.069294 4957 generic.go:334] "Generic (PLEG): container finished" podID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerID="ebac4825633d459a6d06a27eeb731901e2f9453f332e9732010a0759dc1ac644" exitCode=0 Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.069347 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" event={"ID":"f6d43c29-7df2-45a3-8b11-0cf3f6134f09","Type":"ContainerDied","Data":"ebac4825633d459a6d06a27eeb731901e2f9453f332e9732010a0759dc1ac644"} Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.079918 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ztccc" event={"ID":"93827c35-2801-441b-9a24-813c5d4a29ed","Type":"ContainerStarted","Data":"eddad1b3c0b8d2317b6b0b3bbe3f0aac75f10e379e93005bcda7c95fa8297e2f"} Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.079962 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ztccc" 
event={"ID":"93827c35-2801-441b-9a24-813c5d4a29ed","Type":"ContainerStarted","Data":"41a01ae676beab5d23da5aee70f6e7c5e9227f2736a55858bf14ff9d30f762f4"} Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.080043 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerName="dnsmasq-dns" containerID="cri-o://47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1" gracePeriod=10 Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.122554 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371985.73225 podStartE2EDuration="51.122526043s" podCreationTimestamp="2026-02-18 14:52:08 +0000 UTC" firstStartedPulling="2026-02-18 14:52:10.899796368 +0000 UTC m=+1237.420661112" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:52:59.112192703 +0000 UTC m=+1285.633057457" watchObservedRunningTime="2026-02-18 14:52:59.122526043 +0000 UTC m=+1285.643390787" Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.176780 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ztccc" podStartSLOduration=3.17675294 podStartE2EDuration="3.17675294s" podCreationTimestamp="2026-02-18 14:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:52:59.172927449 +0000 UTC m=+1285.693792193" watchObservedRunningTime="2026-02-18 14:52:59.17675294 +0000 UTC m=+1285.697617684" Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.851515 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.852007 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 14:52:59 crc kubenswrapper[4957]: I0218 14:52:59.941815 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.040031 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-config\") pod \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.040588 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-dns-svc\") pod \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.040676 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72t2h\" (UniqueName: \"kubernetes.io/projected/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-kube-api-access-72t2h\") pod \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\" (UID: \"031ed1ca-79fe-4aa7-ade3-f5d612b2c628\") " Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.048806 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-kube-api-access-72t2h" (OuterVolumeSpecName: "kube-api-access-72t2h") pod "031ed1ca-79fe-4aa7-ade3-f5d612b2c628" (UID: "031ed1ca-79fe-4aa7-ade3-f5d612b2c628"). InnerVolumeSpecName "kube-api-access-72t2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.098076 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5s4rw" event={"ID":"e2b4f5fe-0b27-47d8-8158-b51ad4229e86","Type":"ContainerStarted","Data":"d68de4597d66a145bc4fd5a05089d84e894ac89e08d1039d469fc5e692379fcc"} Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.098667 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5s4rw" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.105825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" event={"ID":"f6d43c29-7df2-45a3-8b11-0cf3f6134f09","Type":"ContainerStarted","Data":"96189e3d0b874a98db27892072f0c184f53aa79555b5e9709ffa1c202f5b7929"} Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.105916 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.109225 4957 generic.go:334] "Generic (PLEG): container finished" podID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerID="47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1" exitCode=0 Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.109286 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" event={"ID":"031ed1ca-79fe-4aa7-ade3-f5d612b2c628","Type":"ContainerDied","Data":"47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1"} Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.109321 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" event={"ID":"031ed1ca-79fe-4aa7-ade3-f5d612b2c628","Type":"ContainerDied","Data":"7b34271156fae5bb38f086ddcc3e353e2288da8b7849ae00cc7caddc8b2ec047"} Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.109341 4957 scope.go:117] "RemoveContainer" 
containerID="47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.109496 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pnzzz" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.113953 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" event={"ID":"9d31b9ed-0265-41fb-9f57-538cd96a5524","Type":"ContainerStarted","Data":"91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a"} Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.114602 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.124134 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-config" (OuterVolumeSpecName: "config") pod "031ed1ca-79fe-4aa7-ade3-f5d612b2c628" (UID: "031ed1ca-79fe-4aa7-ade3-f5d612b2c628"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.135172 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5s4rw" podStartSLOduration=7.986260701 podStartE2EDuration="43.135079911s" podCreationTimestamp="2026-02-18 14:52:17 +0000 UTC" firstStartedPulling="2026-02-18 14:52:23.707869522 +0000 UTC m=+1250.228734266" lastFinishedPulling="2026-02-18 14:52:58.856688722 +0000 UTC m=+1285.377553476" observedRunningTime="2026-02-18 14:53:00.120597609 +0000 UTC m=+1286.641462353" watchObservedRunningTime="2026-02-18 14:53:00.135079911 +0000 UTC m=+1286.655944665" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.143425 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.143485 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72t2h\" (UniqueName: \"kubernetes.io/projected/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-kube-api-access-72t2h\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.156281 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" podStartSLOduration=4.156259146 podStartE2EDuration="4.156259146s" podCreationTimestamp="2026-02-18 14:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:00.145236986 +0000 UTC m=+1286.666101740" watchObservedRunningTime="2026-02-18 14:53:00.156259146 +0000 UTC m=+1286.677123890" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.216176 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" podStartSLOduration=3.216154308 podStartE2EDuration="3.216154308s" podCreationTimestamp="2026-02-18 14:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:00.199860114 +0000 UTC m=+1286.720724858" watchObservedRunningTime="2026-02-18 14:53:00.216154308 +0000 UTC m=+1286.737019052" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.243848 4957 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" path="/var/lib/kubelet/pods/ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433/volumes" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.493218 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "031ed1ca-79fe-4aa7-ade3-f5d612b2c628" (UID: "031ed1ca-79fe-4aa7-ade3-f5d612b2c628"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:00 crc kubenswrapper[4957]: I0218 14:53:00.552320 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031ed1ca-79fe-4aa7-ade3-f5d612b2c628-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.816618 4957 scope.go:117] "RemoveContainer" containerID="196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.823711 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pnzzz"] Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.842875 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pnzzz"] Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.953719 4957 scope.go:117] "RemoveContainer" containerID="47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1" Feb 18 14:53:01 crc kubenswrapper[4957]: E0218 14:53:00.954979 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1\": container with ID starting with 47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1 not found: ID does not exist" containerID="47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.955023 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1"} err="failed to get container status \"47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1\": rpc error: code = NotFound desc = could not find container \"47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1\": container with ID starting with 47a725816dae2961b949e3796e3026c3ea07eccd565fa2012d7c46dd3840dbb1 not found: ID does not exist" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.955048 4957 scope.go:117] "RemoveContainer" containerID="196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f" Feb 18 14:53:01 crc kubenswrapper[4957]: E0218 14:53:00.955407 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f\": container with ID starting with 196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f not found: ID does not exist" containerID="196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:00.955511 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f"} err="failed to get container status 
\"196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f\": rpc error: code = NotFound desc = could not find container \"196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f\": container with ID starting with 196eea9f9bdaaba02641be44ab5b1d35d4b5055546311f8c9902151496a2299f not found: ID does not exist" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:01.135367 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerStarted","Data":"d8b2071481292b9d247fc49961706323623ac512cdaf2f0f00c3971ed7058910"} Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:01.140792 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7be80ac2-8e92-4cb0-8184-c35add0ccc9b","Type":"ContainerStarted","Data":"4497c2a589656a10eed8652488b3650aa84e488eef663aba9e656f04b5802c75"} Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:01.246012 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.953768506 podStartE2EDuration="45.245983858s" podCreationTimestamp="2026-02-18 14:52:16 +0000 UTC" firstStartedPulling="2026-02-18 14:52:23.708273424 +0000 UTC m=+1250.229138168" lastFinishedPulling="2026-02-18 14:53:00.000488776 +0000 UTC m=+1286.521353520" observedRunningTime="2026-02-18 14:53:01.19104339 +0000 UTC m=+1287.711908144" watchObservedRunningTime="2026-02-18 14:53:01.245983858 +0000 UTC m=+1287.766848602" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:01.367699 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:01.598399 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 14:53:01 crc kubenswrapper[4957]: I0218 14:53:01.598916 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 14:53:02 crc kubenswrapper[4957]: I0218 14:53:02.176065 4957 generic.go:334] "Generic (PLEG): container finished" podID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerID="cf877cfa21a6ea715f5bf47255c92c00d9f1ee4b0c2d695e93ed394d2f655583" exitCode=0 Feb 18 14:53:02 crc kubenswrapper[4957]: I0218 14:53:02.178350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bgjwb" event={"ID":"2c0f60cd-99b5-453c-9353-5c6298f95d2b","Type":"ContainerDied","Data":"cf877cfa21a6ea715f5bf47255c92c00d9f1ee4b0c2d695e93ed394d2f655583"} Feb 18 14:53:02 crc kubenswrapper[4957]: I0218 14:53:02.227830 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" path="/var/lib/kubelet/pods/031ed1ca-79fe-4aa7-ade3-f5d612b2c628/volumes" Feb 18 14:53:02 crc kubenswrapper[4957]: I0218 14:53:02.563527 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 14:53:02 crc kubenswrapper[4957]: I0218 14:53:02.563930 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 14:53:03 crc kubenswrapper[4957]: E0218 14:53:03.088136 4957 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:36006->38.102.83.213:46479: write tcp 38.102.83.213:36006->38.102.83.213:46479: write: broken pipe Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.203714 4957 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bgjwb" event={"ID":"2c0f60cd-99b5-453c-9353-5c6298f95d2b","Type":"ContainerStarted","Data":"9814ee21255d92207de4352cfafd81a1b7bc60f455fab99988d25382f62e2d20"} Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.203767 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bgjwb" event={"ID":"2c0f60cd-99b5-453c-9353-5c6298f95d2b","Type":"ContainerStarted","Data":"98340965cdea452ffb1f100bfba0cae27372e97af5b46aaae780d8423f5e808b"} Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.204138 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.204275 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.210646 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48a06b17-3799-49aa-97b4-40b55c95fa86","Type":"ContainerStarted","Data":"433d13fb9543cea07951bcd4a2518e3dfc9993b92b1249e1b91cea16e4706a25"} Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.211599 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.243023 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bgjwb" podStartSLOduration=9.562959665 podStartE2EDuration="46.242997125s" podCreationTimestamp="2026-02-18 14:52:17 +0000 UTC" firstStartedPulling="2026-02-18 14:52:24.384026327 +0000 UTC m=+1250.904891071" lastFinishedPulling="2026-02-18 14:53:01.064063787 +0000 UTC m=+1287.584928531" observedRunningTime="2026-02-18 14:53:03.237353161 +0000 UTC m=+1289.758217915" watchObservedRunningTime="2026-02-18 14:53:03.242997125 +0000 UTC m=+1289.763861869" Feb 18 14:53:03 crc kubenswrapper[4957]: I0218 14:53:03.264096 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.620627421 podStartE2EDuration="50.264080298s" podCreationTimestamp="2026-02-18 14:52:13 +0000 UTC" firstStartedPulling="2026-02-18 14:52:15.764603356 +0000 UTC m=+1242.285468100" lastFinishedPulling="2026-02-18 14:53:02.408056233 +0000 UTC m=+1288.928920977" observedRunningTime="2026-02-18 14:53:03.256492247 +0000 UTC m=+1289.777356991" watchObservedRunningTime="2026-02-18 14:53:03.264080298 +0000 UTC m=+1289.784945042" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.086004 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-59s68"] Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.086245 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerName="dnsmasq-dns" containerID="cri-o://96189e3d0b874a98db27892072f0c184f53aa79555b5e9709ffa1c202f5b7929" gracePeriod=10 Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.087562 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.146520 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hss4q"] Feb 18 14:53:04 crc kubenswrapper[4957]: E0218 14:53:04.147170 4957 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerName="init" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.147189 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerName="init" Feb 18 14:53:04 crc kubenswrapper[4957]: E0218 14:53:04.147228 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerName="dnsmasq-dns" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.147235 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerName="dnsmasq-dns" Feb 18 14:53:04 crc kubenswrapper[4957]: E0218 14:53:04.147245 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerName="dnsmasq-dns" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.147251 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerName="dnsmasq-dns" Feb 18 14:53:04 crc kubenswrapper[4957]: E0218 14:53:04.147283 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerName="init" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.147290 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerName="init" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.147519 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="031ed1ca-79fe-4aa7-ade3-f5d612b2c628" containerName="dnsmasq-dns" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.147541 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea49c46a-0b24-4d9c-9dc5-ed3c7f1e9433" containerName="dnsmasq-dns" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.148944 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.195616 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hss4q"] Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.214947 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.281157 4957 generic.go:334] "Generic (PLEG): container finished" podID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerID="96189e3d0b874a98db27892072f0c184f53aa79555b5e9709ffa1c202f5b7929" exitCode=0 Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.282275 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxx4\" (UniqueName: \"kubernetes.io/projected/d7283a8b-4d7e-4959-af72-06f15b3d73b0-kube-api-access-pmxx4\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.282836 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-config\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.282987 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-dns-svc\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.283030 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.282990 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" event={"ID":"f6d43c29-7df2-45a3-8b11-0cf3f6134f09","Type":"ContainerDied","Data":"96189e3d0b874a98db27892072f0c184f53aa79555b5e9709ffa1c202f5b7929"} Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.283087 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.387514 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.389020 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxx4\" (UniqueName: \"kubernetes.io/projected/d7283a8b-4d7e-4959-af72-06f15b3d73b0-kube-api-access-pmxx4\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " 
pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.389171 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-config\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.389280 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-dns-svc\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.389348 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.389497 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.391176 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.391225 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.391453 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-dns-svc\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.391634 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-config\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.486342 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxx4\" (UniqueName: \"kubernetes.io/projected/d7283a8b-4d7e-4959-af72-06f15b3d73b0-kube-api-access-pmxx4\") pod \"dnsmasq-dns-698758b865-hss4q\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.491534 4957 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.869645 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.907201 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrdp6\" (UniqueName: \"kubernetes.io/projected/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-kube-api-access-qrdp6\") pod \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.907579 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-ovsdbserver-sb\") pod \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.907714 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-config\") pod \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.907915 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-dns-svc\") pod \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\" (UID: \"f6d43c29-7df2-45a3-8b11-0cf3f6134f09\") " Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.921545 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-kube-api-access-qrdp6" (OuterVolumeSpecName: "kube-api-access-qrdp6") pod "f6d43c29-7df2-45a3-8b11-0cf3f6134f09" (UID: "f6d43c29-7df2-45a3-8b11-0cf3f6134f09"). InnerVolumeSpecName "kube-api-access-qrdp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:04 crc kubenswrapper[4957]: I0218 14:53:04.981945 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6d43c29-7df2-45a3-8b11-0cf3f6134f09" (UID: "f6d43c29-7df2-45a3-8b11-0cf3f6134f09"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.009994 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrdp6\" (UniqueName: \"kubernetes.io/projected/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-kube-api-access-qrdp6\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.010033 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.015169 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6d43c29-7df2-45a3-8b11-0cf3f6134f09" (UID: "f6d43c29-7df2-45a3-8b11-0cf3f6134f09"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.015827 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-config" (OuterVolumeSpecName: "config") pod "f6d43c29-7df2-45a3-8b11-0cf3f6134f09" (UID: "f6d43c29-7df2-45a3-8b11-0cf3f6134f09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.111967 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.111998 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6d43c29-7df2-45a3-8b11-0cf3f6134f09-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.149682 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hss4q"] Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.234164 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.236508 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerName="init" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.236535 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerName="init" Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.236580 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerName="dnsmasq-dns" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.236588 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerName="dnsmasq-dns" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.236840 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" containerName="dnsmasq-dns" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.247318 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.250532 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.250735 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rgvkq" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.251036 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.251184 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.263833 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.295280 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hss4q" event={"ID":"d7283a8b-4d7e-4959-af72-06f15b3d73b0","Type":"ContainerStarted","Data":"66f87977a717fe636aaf4ca22ae08761df04af2d8c02fabb73b9c78f88a16835"} Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.298902 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.299536 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-59s68" event={"ID":"f6d43c29-7df2-45a3-8b11-0cf3f6134f09","Type":"ContainerDied","Data":"4313656481bd97ba3feee4643bf9520ea12ea3acca427abb27fe51e535080e51"} Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.299578 4957 scope.go:117] "RemoveContainer" containerID="96189e3d0b874a98db27892072f0c184f53aa79555b5e9709ffa1c202f5b7929" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.315901 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.316063 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5790a7ec-79bb-49af-842f-e2b879f33184-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.316116 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5790a7ec-79bb-49af-842f-e2b879f33184-cache\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.316186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb4v\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-kube-api-access-hhb4v\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.316231 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5790a7ec-79bb-49af-842f-e2b879f33184-lock\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.316321 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.354246 4957 scope.go:117] "RemoveContainer" containerID="ebac4825633d459a6d06a27eeb731901e2f9453f332e9732010a0759dc1ac644" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.366524 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-59s68"] Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.378373 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-59s68"] Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.421657 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.421723 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5790a7ec-79bb-49af-842f-e2b879f33184-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.421761 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5790a7ec-79bb-49af-842f-e2b879f33184-cache\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.421795 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhb4v\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-kube-api-access-hhb4v\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.421822 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5790a7ec-79bb-49af-842f-e2b879f33184-lock\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.421852 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.422014 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:53:05 crc kubenswrapper[4957]: 
E0218 14:53:05.422028 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.422071 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift podName:5790a7ec-79bb-49af-842f-e2b879f33184 nodeName:}" failed. No retries permitted until 2026-02-18 14:53:05.922054407 +0000 UTC m=+1292.442919151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift") pod "swift-storage-0" (UID: "5790a7ec-79bb-49af-842f-e2b879f33184") : configmap "swift-ring-files" not found Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.423144 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5790a7ec-79bb-49af-842f-e2b879f33184-lock\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.423350 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5790a7ec-79bb-49af-842f-e2b879f33184-cache\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.428174 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5790a7ec-79bb-49af-842f-e2b879f33184-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.459455 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhb4v\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-kube-api-access-hhb4v\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.476865 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.476912 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/39003254e94661b1116263a6c781816bfc1c1f73d83798e065d3ec9e2dc2256b/globalmount\"" pod="openstack/swift-storage-0"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.572701 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7399b3bb-ce04-4b35-a398-2454e88e8531\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.636717 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.707350 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.915325 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.917307 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.920714 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d2dbw"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.920902 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.921021 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.921221 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.935448 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0"
Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.935649 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.935667 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 14:53:05 crc kubenswrapper[4957]: E0218 14:53:05.935719 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift podName:5790a7ec-79bb-49af-842f-e2b879f33184 nodeName:}" failed. No retries permitted until 2026-02-18 14:53:06.935705104 +0000 UTC m=+1293.456569848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift") pod "swift-storage-0" (UID: "5790a7ec-79bb-49af-842f-e2b879f33184") : configmap "swift-ring-files" not found
Feb 18 14:53:05 crc kubenswrapper[4957]: I0218 14:53:05.942298 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.036285 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.038788 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/164c9825-00c8-4fcb-a706-b2afb16b9229-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.038904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164c9825-00c8-4fcb-a706-b2afb16b9229-config\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.038929 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2t7\" (UniqueName: \"kubernetes.io/projected/164c9825-00c8-4fcb-a706-b2afb16b9229-kube-api-access-nc2t7\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.039073 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164c9825-00c8-4fcb-a706-b2afb16b9229-scripts\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.039294 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.039345 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.039454 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.133412 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141492 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141628 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141705 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/164c9825-00c8-4fcb-a706-b2afb16b9229-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141790 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164c9825-00c8-4fcb-a706-b2afb16b9229-config\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141812 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2t7\" (UniqueName: \"kubernetes.io/projected/164c9825-00c8-4fcb-a706-b2afb16b9229-kube-api-access-nc2t7\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141868 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164c9825-00c8-4fcb-a706-b2afb16b9229-scripts\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.141955 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.142321 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/164c9825-00c8-4fcb-a706-b2afb16b9229-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.143161 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/164c9825-00c8-4fcb-a706-b2afb16b9229-scripts\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.143180 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164c9825-00c8-4fcb-a706-b2afb16b9229-config\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.146841 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.147544 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.148094 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/164c9825-00c8-4fcb-a706-b2afb16b9229-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.188441 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2t7\" (UniqueName: \"kubernetes.io/projected/164c9825-00c8-4fcb-a706-b2afb16b9229-kube-api-access-nc2t7\") pod \"ovn-northd-0\" (UID: \"164c9825-00c8-4fcb-a706-b2afb16b9229\") " pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.244581 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.248969 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d43c29-7df2-45a3-8b11-0cf3f6134f09" path="/var/lib/kubelet/pods/f6d43c29-7df2-45a3-8b11-0cf3f6134f09/volumes"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.315528 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hzpv6"]
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.317377 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.320738 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.320889 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.321093 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.324636 4957 generic.go:334] "Generic (PLEG): container finished" podID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerID="5b0dab01c64196efb43cdca4cc00dd15f59fa6a73036ced69de0f45d6d56b4f3" exitCode=0
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.324693 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hss4q" event={"ID":"d7283a8b-4d7e-4959-af72-06f15b3d73b0","Type":"ContainerDied","Data":"5b0dab01c64196efb43cdca4cc00dd15f59fa6a73036ced69de0f45d6d56b4f3"}
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.344833 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hzpv6"]
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.458302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpbn\" (UniqueName: \"kubernetes.io/projected/73ec2a80-4748-44f3-a979-1c854a4f3b49-kube-api-access-4wpbn\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.459084 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/73ec2a80-4748-44f3-a979-1c854a4f3b49-etc-swift\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.459169 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-ring-data-devices\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.459300 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-dispersionconf\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.459862 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-combined-ca-bundle\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.459938 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-swiftconf\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.460125 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-scripts\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.561782 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/73ec2a80-4748-44f3-a979-1c854a4f3b49-etc-swift\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.561838 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-ring-data-devices\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.561874 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-dispersionconf\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.561995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-combined-ca-bundle\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.562021 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-swiftconf\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.562050 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-scripts\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.562094 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpbn\" (UniqueName: \"kubernetes.io/projected/73ec2a80-4748-44f3-a979-1c854a4f3b49-kube-api-access-4wpbn\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.564462 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-ring-data-devices\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.564576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-scripts\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.564932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/73ec2a80-4748-44f3-a979-1c854a4f3b49-etc-swift\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.567785 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-dispersionconf\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.568291 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-swiftconf\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.568457 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-combined-ca-bundle\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.584921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpbn\" (UniqueName: \"kubernetes.io/projected/73ec2a80-4748-44f3-a979-1c854a4f3b49-kube-api-access-4wpbn\") pod \"swift-ring-rebalance-hzpv6\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " pod="openstack/swift-ring-rebalance-hzpv6"
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.825751 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 14:53:06 crc kubenswrapper[4957]: W0218 14:53:06.828933 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod164c9825_00c8_4fcb_a706_b2afb16b9229.slice/crio-a1faae5f16216a6b4766c0c290e73c4f802baf6a9e7f06cbb777a0733b4f1385 WatchSource:0}: Error finding container a1faae5f16216a6b4766c0c290e73c4f802baf6a9e7f06cbb777a0733b4f1385: Status 404 returned error can't find the container with id a1faae5f16216a6b4766c0c290e73c4f802baf6a9e7f06cbb777a0733b4f1385
Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.837389 4957 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-hzpv6" Feb 18 14:53:06 crc kubenswrapper[4957]: I0218 14:53:06.977337 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:06 crc kubenswrapper[4957]: E0218 14:53:06.977588 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:53:06 crc kubenswrapper[4957]: E0218 14:53:06.978049 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:53:06 crc kubenswrapper[4957]: E0218 14:53:06.978130 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift podName:5790a7ec-79bb-49af-842f-e2b879f33184 nodeName:}" failed. No retries permitted until 2026-02-18 14:53:08.9781075 +0000 UTC m=+1295.498972244 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift") pod "swift-storage-0" (UID: "5790a7ec-79bb-49af-842f-e2b879f33184") : configmap "swift-ring-files" not found Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.279680 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.279754 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:53:07 crc kubenswrapper[4957]: W0218 14:53:07.333105 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ec2a80_4748_44f3_a979_1c854a4f3b49.slice/crio-b067e88e5026b0630948a152e54c6b1f1e1aa9517c96fe6b1782a3df8bc83ac8 WatchSource:0}: Error finding container b067e88e5026b0630948a152e54c6b1f1e1aa9517c96fe6b1782a3df8bc83ac8: Status 404 returned error can't find the container with id b067e88e5026b0630948a152e54c6b1f1e1aa9517c96fe6b1782a3df8bc83ac8 Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.341385 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"164c9825-00c8-4fcb-a706-b2afb16b9229","Type":"ContainerStarted","Data":"a1faae5f16216a6b4766c0c290e73c4f802baf6a9e7f06cbb777a0733b4f1385"} Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.344169 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hss4q" event={"ID":"d7283a8b-4d7e-4959-af72-06f15b3d73b0","Type":"ContainerStarted","Data":"f8bf5fff9370b1b5a8810319f2867ae628e75c611521dfbefc4ee7981333eb63"} Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.344385 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:07 crc kubenswrapper[4957]: 
I0218 14:53:07.348321 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hzpv6"] Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.368194 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hss4q" podStartSLOduration=3.368172764 podStartE2EDuration="3.368172764s" podCreationTimestamp="2026-02-18 14:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:07.360391147 +0000 UTC m=+1293.881255891" watchObservedRunningTime="2026-02-18 14:53:07.368172764 +0000 UTC m=+1293.889037508" Feb 18 14:53:07 crc kubenswrapper[4957]: I0218 14:53:07.575340 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.372704 4957 generic.go:334] "Generic (PLEG): container finished" podID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerID="d8b2071481292b9d247fc49961706323623ac512cdaf2f0f00c3971ed7058910" exitCode=0 Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.372919 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerDied","Data":"d8b2071481292b9d247fc49961706323623ac512cdaf2f0f00c3971ed7058910"} Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.382150 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hzpv6" event={"ID":"73ec2a80-4748-44f3-a979-1c854a4f3b49","Type":"ContainerStarted","Data":"b067e88e5026b0630948a152e54c6b1f1e1aa9517c96fe6b1782a3df8bc83ac8"} Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.518587 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vbjt5"] Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.521779 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.525323 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.535275 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vbjt5"] Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.617451 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9d6af0-44f2-4777-9031-70eecd73be87-operator-scripts\") pod \"root-account-create-update-vbjt5\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.617772 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxnv\" (UniqueName: \"kubernetes.io/projected/9c9d6af0-44f2-4777-9031-70eecd73be87-kube-api-access-svxnv\") pod \"root-account-create-update-vbjt5\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.719936 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9d6af0-44f2-4777-9031-70eecd73be87-operator-scripts\") pod \"root-account-create-update-vbjt5\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.720081 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svxnv\" (UniqueName: \"kubernetes.io/projected/9c9d6af0-44f2-4777-9031-70eecd73be87-kube-api-access-svxnv\") pod \"root-account-create-update-vbjt5\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.720766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9d6af0-44f2-4777-9031-70eecd73be87-operator-scripts\") pod \"root-account-create-update-vbjt5\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.740678 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svxnv\" (UniqueName: \"kubernetes.io/projected/9c9d6af0-44f2-4777-9031-70eecd73be87-kube-api-access-svxnv\") pod \"root-account-create-update-vbjt5\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:08 crc kubenswrapper[4957]: I0218 14:53:08.853918 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:09 crc kubenswrapper[4957]: I0218 14:53:09.027202 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:09 crc kubenswrapper[4957]: E0218 14:53:09.027878 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:53:09 crc kubenswrapper[4957]: E0218 14:53:09.027891 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:53:09 crc kubenswrapper[4957]: E0218 14:53:09.027935 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift podName:5790a7ec-79bb-49af-842f-e2b879f33184 nodeName:}" failed. No retries permitted until 2026-02-18 14:53:13.027920303 +0000 UTC m=+1299.548785047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift") pod "swift-storage-0" (UID: "5790a7ec-79bb-49af-842f-e2b879f33184") : configmap "swift-ring-files" not found Feb 18 14:53:09 crc kubenswrapper[4957]: I0218 14:53:09.341328 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vbjt5"] Feb 18 14:53:09 crc kubenswrapper[4957]: I0218 14:53:09.391270 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"164c9825-00c8-4fcb-a706-b2afb16b9229","Type":"ContainerStarted","Data":"e85b4feda85aef872ce52ae50280a5e90c41c92a0710a7294501c0fb9c5661c2"} Feb 18 14:53:09 crc kubenswrapper[4957]: I0218 14:53:09.392303 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"164c9825-00c8-4fcb-a706-b2afb16b9229","Type":"ContainerStarted","Data":"90992215fd67db820c2b5aa792d28f9391920aca718729253bc1dbc05be1269a"} Feb 18 14:53:09 crc kubenswrapper[4957]: I0218 14:53:09.393819 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 14:53:09 crc kubenswrapper[4957]: I0218 14:53:09.421513 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.993799967 podStartE2EDuration="4.421495158s" podCreationTimestamp="2026-02-18 14:53:05 +0000 UTC" firstStartedPulling="2026-02-18 14:53:06.831678761 +0000 UTC m=+1293.352543505" lastFinishedPulling="2026-02-18 14:53:08.259373952 +0000 UTC m=+1294.780238696" observedRunningTime="2026-02-18 14:53:09.417896405 +0000 UTC m=+1295.938761169" watchObservedRunningTime="2026-02-18 14:53:09.421495158 +0000 UTC m=+1295.942359902" Feb 18 14:53:10 crc kubenswrapper[4957]: W0218 14:53:10.508104 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9d6af0_44f2_4777_9031_70eecd73be87.slice/crio-4b25f037945bc5caaaa964532279c055636270916f049ef9ab35f109e3843254 WatchSource:0}: Error finding container 4b25f037945bc5caaaa964532279c055636270916f049ef9ab35f109e3843254: Status 404 returned error can't find the container with id 4b25f037945bc5caaaa964532279c055636270916f049ef9ab35f109e3843254 Feb 18 14:53:11 crc 
kubenswrapper[4957]: I0218 14:53:11.420842 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbjt5" event={"ID":"9c9d6af0-44f2-4777-9031-70eecd73be87","Type":"ContainerStarted","Data":"4b25f037945bc5caaaa964532279c055636270916f049ef9ab35f109e3843254"} Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.536952 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7f89w"] Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.538271 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.547195 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7f89w"] Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.580803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq7v2\" (UniqueName: \"kubernetes.io/projected/644bd125-826c-4c6a-85e7-ab56ade2412f-kube-api-access-gq7v2\") pod \"glance-db-create-7f89w\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.580871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644bd125-826c-4c6a-85e7-ab56ade2412f-operator-scripts\") pod \"glance-db-create-7f89w\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.643623 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-43b7-account-create-update-69wnm"] Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.646715 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.660077 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.672029 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-43b7-account-create-update-69wnm"] Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.685401 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7n5f\" (UniqueName: \"kubernetes.io/projected/9a720a9a-d59f-4a82-8b76-87196880798e-kube-api-access-d7n5f\") pod \"glance-43b7-account-create-update-69wnm\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.686360 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq7v2\" (UniqueName: \"kubernetes.io/projected/644bd125-826c-4c6a-85e7-ab56ade2412f-kube-api-access-gq7v2\") pod \"glance-db-create-7f89w\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.686459 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644bd125-826c-4c6a-85e7-ab56ade2412f-operator-scripts\") pod \"glance-db-create-7f89w\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.686536 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a720a9a-d59f-4a82-8b76-87196880798e-operator-scripts\") pod \"glance-43b7-account-create-update-69wnm\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.687709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644bd125-826c-4c6a-85e7-ab56ade2412f-operator-scripts\") pod \"glance-db-create-7f89w\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.712900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq7v2\" (UniqueName: \"kubernetes.io/projected/644bd125-826c-4c6a-85e7-ab56ade2412f-kube-api-access-gq7v2\") pod \"glance-db-create-7f89w\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.788242 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a720a9a-d59f-4a82-8b76-87196880798e-operator-scripts\") pod \"glance-43b7-account-create-update-69wnm\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.788402 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7n5f\" (UniqueName: \"kubernetes.io/projected/9a720a9a-d59f-4a82-8b76-87196880798e-kube-api-access-d7n5f\") pod 
\"glance-43b7-account-create-update-69wnm\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.788989 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a720a9a-d59f-4a82-8b76-87196880798e-operator-scripts\") pod \"glance-43b7-account-create-update-69wnm\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.814944 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7n5f\" (UniqueName: \"kubernetes.io/projected/9a720a9a-d59f-4a82-8b76-87196880798e-kube-api-access-d7n5f\") pod \"glance-43b7-account-create-update-69wnm\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.865964 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7f89w" Feb 18 14:53:11 crc kubenswrapper[4957]: I0218 14:53:11.987699 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.242877 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g6wjq"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.245823 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.260866 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g6wjq"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.306613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tcs\" (UniqueName: \"kubernetes.io/projected/a6828e6e-401e-4774-83d8-eb1dab6661a3-kube-api-access-l2tcs\") pod \"keystone-db-create-g6wjq\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.306791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6828e6e-401e-4774-83d8-eb1dab6661a3-operator-scripts\") pod \"keystone-db-create-g6wjq\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.356362 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a8b2-account-create-update-pwb27"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.358041 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.370102 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.381432 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a8b2-account-create-update-pwb27"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.408970 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6828e6e-401e-4774-83d8-eb1dab6661a3-operator-scripts\") pod \"keystone-db-create-g6wjq\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.409045 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcg9g\" (UniqueName: \"kubernetes.io/projected/23769552-282f-49ac-b650-6047f54aa60e-kube-api-access-rcg9g\") pod \"keystone-a8b2-account-create-update-pwb27\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.409078 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23769552-282f-49ac-b650-6047f54aa60e-operator-scripts\") pod \"keystone-a8b2-account-create-update-pwb27\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.409465 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tcs\" (UniqueName: \"kubernetes.io/projected/a6828e6e-401e-4774-83d8-eb1dab6661a3-kube-api-access-l2tcs\") pod \"keystone-db-create-g6wjq\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.411344 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6828e6e-401e-4774-83d8-eb1dab6661a3-operator-scripts\") pod \"keystone-db-create-g6wjq\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.428862 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7f89w"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.432731 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tcs\" (UniqueName: \"kubernetes.io/projected/a6828e6e-401e-4774-83d8-eb1dab6661a3-kube-api-access-l2tcs\") pod \"keystone-db-create-g6wjq\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: W0218 14:53:12.433168 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644bd125_826c_4c6a_85e7_ab56ade2412f.slice/crio-124d21962eb0e8744cc02ded1a6922b07f8481dd6e000052fe59f364b1fdfb76 WatchSource:0}: Error finding container 124d21962eb0e8744cc02ded1a6922b07f8481dd6e000052fe59f364b1fdfb76: Status 404 returned error can't find the container with id 124d21962eb0e8744cc02ded1a6922b07f8481dd6e000052fe59f364b1fdfb76 
Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.445693 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hzpv6" event={"ID":"73ec2a80-4748-44f3-a979-1c854a4f3b49","Type":"ContainerStarted","Data":"64e7a4ff94ae8524520791c2fb09e798dc4ce8ee24989374ad99c4f871de292d"} Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.451899 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbjt5" event={"ID":"9c9d6af0-44f2-4777-9031-70eecd73be87","Type":"ContainerStarted","Data":"f90cb7227354720dcb784183e790fc281faa8f9ddead2c64c3b1a34b749cc189"} Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.477904 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hzpv6" podStartSLOduration=1.8709804270000001 podStartE2EDuration="6.477885525s" podCreationTimestamp="2026-02-18 14:53:06 +0000 UTC" firstStartedPulling="2026-02-18 14:53:07.335330169 +0000 UTC m=+1293.856194913" lastFinishedPulling="2026-02-18 14:53:11.942235267 +0000 UTC m=+1298.463100011" observedRunningTime="2026-02-18 14:53:12.464899258 +0000 UTC m=+1298.985764002" watchObservedRunningTime="2026-02-18 14:53:12.477885525 +0000 UTC m=+1298.998750269" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.492118 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vbjt5" podStartSLOduration=4.492079178 podStartE2EDuration="4.492079178s" podCreationTimestamp="2026-02-18 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:12.478790341 +0000 UTC m=+1298.999655075" watchObservedRunningTime="2026-02-18 14:53:12.492079178 +0000 UTC m=+1299.012943922" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.513051 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcg9g\" (UniqueName: \"kubernetes.io/projected/23769552-282f-49ac-b650-6047f54aa60e-kube-api-access-rcg9g\") pod \"keystone-a8b2-account-create-update-pwb27\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.513116 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23769552-282f-49ac-b650-6047f54aa60e-operator-scripts\") pod \"keystone-a8b2-account-create-update-pwb27\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.514446 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23769552-282f-49ac-b650-6047f54aa60e-operator-scripts\") pod \"keystone-a8b2-account-create-update-pwb27\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.537160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcg9g\" (UniqueName: \"kubernetes.io/projected/23769552-282f-49ac-b650-6047f54aa60e-kube-api-access-rcg9g\") pod \"keystone-a8b2-account-create-update-pwb27\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc 
kubenswrapper[4957]: I0218 14:53:12.547527 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-f4rlf"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.549389 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.566869 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-f4rlf"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.596149 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.617126 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5v9n\" (UniqueName: \"kubernetes.io/projected/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-kube-api-access-c5v9n\") pod \"placement-db-create-f4rlf\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.617304 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-operator-scripts\") pod \"placement-db-create-f4rlf\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.662237 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7083-account-create-update-hrlxp"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.665592 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.668554 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.689039 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7083-account-create-update-hrlxp"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.690058 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.700840 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-43b7-account-create-update-69wnm"] Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.719496 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-operator-scripts\") pod \"placement-db-create-f4rlf\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.719656 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-operator-scripts\") pod \"placement-7083-account-create-update-hrlxp\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.719753 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5v9n\" (UniqueName: \"kubernetes.io/projected/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-kube-api-access-c5v9n\") pod \"placement-db-create-f4rlf\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.720165 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99lx6\" (UniqueName: \"kubernetes.io/projected/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-kube-api-access-99lx6\") pod \"placement-7083-account-create-update-hrlxp\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.720265 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-operator-scripts\") pod \"placement-db-create-f4rlf\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.744411 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5v9n\" (UniqueName: \"kubernetes.io/projected/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-kube-api-access-c5v9n\") pod \"placement-db-create-f4rlf\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.823955 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-operator-scripts\") pod \"placement-7083-account-create-update-hrlxp\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.824142 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99lx6\" (UniqueName: \"kubernetes.io/projected/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-kube-api-access-99lx6\") pod \"placement-7083-account-create-update-hrlxp\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " pod="openstack/placement-7083-account-create-update-hrlxp" 
Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.825409 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-operator-scripts\") pod \"placement-7083-account-create-update-hrlxp\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.848639 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99lx6\" (UniqueName: \"kubernetes.io/projected/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-kube-api-access-99lx6\") pod \"placement-7083-account-create-update-hrlxp\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.888908 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:12 crc kubenswrapper[4957]: I0218 14:53:12.986078 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.030949 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:13 crc kubenswrapper[4957]: E0218 14:53:13.031800 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:53:13 crc kubenswrapper[4957]: E0218 14:53:13.031821 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:53:13 crc kubenswrapper[4957]: E0218 14:53:13.031927 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift podName:5790a7ec-79bb-49af-842f-e2b879f33184 nodeName:}" failed. No retries permitted until 2026-02-18 14:53:21.031888286 +0000 UTC m=+1307.552753030 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift") pod "swift-storage-0" (UID: "5790a7ec-79bb-49af-842f-e2b879f33184") : configmap "swift-ring-files" not found Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.129088 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g6wjq"] Feb 18 14:53:13 crc kubenswrapper[4957]: W0218 14:53:13.171994 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6828e6e_401e_4774_83d8_eb1dab6661a3.slice/crio-fd1285146f89bc6f9497ce1aedcb8264c64cec90b4ad5c4221c334f8478f9ac1 WatchSource:0}: Error finding container fd1285146f89bc6f9497ce1aedcb8264c64cec90b4ad5c4221c334f8478f9ac1: Status 404 returned error can't find the container with id fd1285146f89bc6f9497ce1aedcb8264c64cec90b4ad5c4221c334f8478f9ac1 Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.283230 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a8b2-account-create-update-pwb27"] Feb 18 14:53:13 crc kubenswrapper[4957]: W0218 14:53:13.519081 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147d3a76_36f3_4ba7_85ad_a05dfe2ec485.slice/crio-9269627efb3eb2adca02f62f71d321357dd40dd2581cf57e078c3d15e39663c7 WatchSource:0}: Error finding container 9269627efb3eb2adca02f62f71d321357dd40dd2581cf57e078c3d15e39663c7: Status 404 returned error can't find the container with id 9269627efb3eb2adca02f62f71d321357dd40dd2581cf57e078c3d15e39663c7 Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.520564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-43b7-account-create-update-69wnm" event={"ID":"9a720a9a-d59f-4a82-8b76-87196880798e","Type":"ContainerStarted","Data":"b8ba44a9044a907512244e6690a33cd19c71efaa27f8e53fa4e8e8f6f432144a"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.520608 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-43b7-account-create-update-69wnm" event={"ID":"9a720a9a-d59f-4a82-8b76-87196880798e","Type":"ContainerStarted","Data":"41ac01e3689aebfbc8665c800ccf5e4c8124132992f6a8048493370e67072d0a"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.525045 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-f4rlf"] Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.527333 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g6wjq" event={"ID":"a6828e6e-401e-4774-83d8-eb1dab6661a3","Type":"ContainerStarted","Data":"fd1285146f89bc6f9497ce1aedcb8264c64cec90b4ad5c4221c334f8478f9ac1"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.529157 4957 generic.go:334] "Generic (PLEG): container finished" podID="9c9d6af0-44f2-4777-9031-70eecd73be87" containerID="f90cb7227354720dcb784183e790fc281faa8f9ddead2c64c3b1a34b749cc189" exitCode=0 Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.529218 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbjt5" event={"ID":"9c9d6af0-44f2-4777-9031-70eecd73be87","Type":"ContainerDied","Data":"f90cb7227354720dcb784183e790fc281faa8f9ddead2c64c3b1a34b749cc189"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.530799 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a8b2-account-create-update-pwb27" 
event={"ID":"23769552-282f-49ac-b650-6047f54aa60e","Type":"ContainerStarted","Data":"c18de5e9139a9b688977f8a3cf55a870a57f7ace93221bfc200696c06fe45c81"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.533491 4957 generic.go:334] "Generic (PLEG): container finished" podID="644bd125-826c-4c6a-85e7-ab56ade2412f" containerID="ffcba3119d19cadd3913bde02e917a950f9dcad735ea738b9a917c81bbec112d" exitCode=0 Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.534580 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7f89w" event={"ID":"644bd125-826c-4c6a-85e7-ab56ade2412f","Type":"ContainerDied","Data":"ffcba3119d19cadd3913bde02e917a950f9dcad735ea738b9a917c81bbec112d"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.534610 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7f89w" event={"ID":"644bd125-826c-4c6a-85e7-ab56ade2412f","Type":"ContainerStarted","Data":"124d21962eb0e8744cc02ded1a6922b07f8481dd6e000052fe59f364b1fdfb76"} Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.549995 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-43b7-account-create-update-69wnm" podStartSLOduration=2.5499752129999997 podStartE2EDuration="2.549975213s" podCreationTimestamp="2026-02-18 14:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:13.53747767 +0000 UTC m=+1300.058342414" watchObservedRunningTime="2026-02-18 14:53:13.549975213 +0000 UTC m=+1300.070839957" Feb 18 14:53:13 crc kubenswrapper[4957]: I0218 14:53:13.646724 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7083-account-create-update-hrlxp"] Feb 18 14:53:13 crc kubenswrapper[4957]: W0218 14:53:13.649751 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7e53e53_e1cc_43a4_8b83_ba7ed5bb323b.slice/crio-ba78a0530f17fba2fb83ba6c940bec3c211da2db99da5886406fa80fc332ffad WatchSource:0}: Error finding container ba78a0530f17fba2fb83ba6c940bec3c211da2db99da5886406fa80fc332ffad: Status 404 returned error can't find the container with id ba78a0530f17fba2fb83ba6c940bec3c211da2db99da5886406fa80fc332ffad Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.016391 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-6fnm7"] Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.017819 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.032279 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-6fnm7"] Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.062198 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-6fnm7\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.062250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmhf\" (UniqueName: \"kubernetes.io/projected/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-kube-api-access-qbmhf\") pod \"mysqld-exporter-openstack-db-create-6fnm7\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.165715 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-6fnm7\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.165763 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbmhf\" (UniqueName: \"kubernetes.io/projected/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-kube-api-access-qbmhf\") pod \"mysqld-exporter-openstack-db-create-6fnm7\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.166562 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-6fnm7\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.213163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbmhf\" (UniqueName: \"kubernetes.io/projected/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-kube-api-access-qbmhf\") pod \"mysqld-exporter-openstack-db-create-6fnm7\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.351312 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-66ea-account-create-update-ttrz9"] Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.353889 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.355565 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.355821 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.356674 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.376623 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-66ea-account-create-update-ttrz9"] Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.403613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9976d545-5d6c-44b9-993d-14890ae6a93e-operator-scripts\") pod \"mysqld-exporter-66ea-account-create-update-ttrz9\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.403954 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrftq\" (UniqueName: \"kubernetes.io/projected/9976d545-5d6c-44b9-993d-14890ae6a93e-kube-api-access-qrftq\") pod \"mysqld-exporter-66ea-account-create-update-ttrz9\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.494567 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.519834 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9976d545-5d6c-44b9-993d-14890ae6a93e-operator-scripts\") pod \"mysqld-exporter-66ea-account-create-update-ttrz9\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.520031 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrftq\" (UniqueName: \"kubernetes.io/projected/9976d545-5d6c-44b9-993d-14890ae6a93e-kube-api-access-qrftq\") pod \"mysqld-exporter-66ea-account-create-update-ttrz9\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.520812 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9976d545-5d6c-44b9-993d-14890ae6a93e-operator-scripts\") pod \"mysqld-exporter-66ea-account-create-update-ttrz9\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.552464 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrftq\" (UniqueName: \"kubernetes.io/projected/9976d545-5d6c-44b9-993d-14890ae6a93e-kube-api-access-qrftq\") pod \"mysqld-exporter-66ea-account-create-update-ttrz9\" (UID: 
\"9976d545-5d6c-44b9-993d-14890ae6a93e\") " pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.567262 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7083-account-create-update-hrlxp" event={"ID":"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b","Type":"ContainerStarted","Data":"ba78a0530f17fba2fb83ba6c940bec3c211da2db99da5886406fa80fc332ffad"} Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.586866 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f4rlf" event={"ID":"147d3a76-36f3-4ba7-85ad-a05dfe2ec485","Type":"ContainerStarted","Data":"9269627efb3eb2adca02f62f71d321357dd40dd2581cf57e078c3d15e39663c7"} Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.618156 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wr9w"] Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.618386 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerName="dnsmasq-dns" containerID="cri-o://91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a" gracePeriod=10 Feb 18 14:53:14 crc kubenswrapper[4957]: I0218 14:53:14.708770 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.218504 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-6fnm7"] Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.473232 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7f89w" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.476375 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.603411 4957 generic.go:334] "Generic (PLEG): container finished" podID="9a720a9a-d59f-4a82-8b76-87196880798e" containerID="b8ba44a9044a907512244e6690a33cd19c71efaa27f8e53fa4e8e8f6f432144a" exitCode=0 Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.603497 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-43b7-account-create-update-69wnm" event={"ID":"9a720a9a-d59f-4a82-8b76-87196880798e","Type":"ContainerDied","Data":"b8ba44a9044a907512244e6690a33cd19c71efaa27f8e53fa4e8e8f6f432144a"} Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.605164 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g6wjq" event={"ID":"a6828e6e-401e-4774-83d8-eb1dab6661a3","Type":"ContainerStarted","Data":"ab1eb9f7c718801405de55e7bcd5bb513c967c41a647c2cd7f8373128378eb56"} Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.608608 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" event={"ID":"91430091-a9a7-4cb1-b10e-85ccaeb24fbc","Type":"ContainerStarted","Data":"cb5ef4a4458f581c7ee15f1c420034170d64b9701db43a6c55dbc89267cfdce9"} Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.610929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbjt5" event={"ID":"9c9d6af0-44f2-4777-9031-70eecd73be87","Type":"ContainerDied","Data":"4b25f037945bc5caaaa964532279c055636270916f049ef9ab35f109e3843254"} Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.610962 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b25f037945bc5caaaa964532279c055636270916f049ef9ab35f109e3843254" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.611027 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vbjt5" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.622222 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7f89w" event={"ID":"644bd125-826c-4c6a-85e7-ab56ade2412f","Type":"ContainerDied","Data":"124d21962eb0e8744cc02ded1a6922b07f8481dd6e000052fe59f364b1fdfb76"} Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.622261 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7f89w" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.622272 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124d21962eb0e8744cc02ded1a6922b07f8481dd6e000052fe59f364b1fdfb76" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.631332 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq7v2\" (UniqueName: \"kubernetes.io/projected/644bd125-826c-4c6a-85e7-ab56ade2412f-kube-api-access-gq7v2\") pod \"644bd125-826c-4c6a-85e7-ab56ade2412f\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.631394 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644bd125-826c-4c6a-85e7-ab56ade2412f-operator-scripts\") pod \"644bd125-826c-4c6a-85e7-ab56ade2412f\" (UID: \"644bd125-826c-4c6a-85e7-ab56ade2412f\") " Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.631479 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9d6af0-44f2-4777-9031-70eecd73be87-operator-scripts\") pod \"9c9d6af0-44f2-4777-9031-70eecd73be87\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.631520 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svxnv\" (UniqueName: \"kubernetes.io/projected/9c9d6af0-44f2-4777-9031-70eecd73be87-kube-api-access-svxnv\") pod \"9c9d6af0-44f2-4777-9031-70eecd73be87\" (UID: \"9c9d6af0-44f2-4777-9031-70eecd73be87\") " Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.631559 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-66ea-account-create-update-ttrz9"] Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.632668 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c9d6af0-44f2-4777-9031-70eecd73be87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c9d6af0-44f2-4777-9031-70eecd73be87" (UID: "9c9d6af0-44f2-4777-9031-70eecd73be87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.632673 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644bd125-826c-4c6a-85e7-ab56ade2412f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "644bd125-826c-4c6a-85e7-ab56ade2412f" (UID: "644bd125-826c-4c6a-85e7-ab56ade2412f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:15 crc kubenswrapper[4957]: W0218 14:53:15.635123 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9976d545_5d6c_44b9_993d_14890ae6a93e.slice/crio-06ec1653157d70bca40bc830edea3dfc10518e9d6d9909cdf567f429f52eecb5 WatchSource:0}: Error finding container 06ec1653157d70bca40bc830edea3dfc10518e9d6d9909cdf567f429f52eecb5: Status 404 returned error can't find the container with id 06ec1653157d70bca40bc830edea3dfc10518e9d6d9909cdf567f429f52eecb5 Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.638974 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9d6af0-44f2-4777-9031-70eecd73be87-kube-api-access-svxnv" (OuterVolumeSpecName: "kube-api-access-svxnv") pod "9c9d6af0-44f2-4777-9031-70eecd73be87" (UID: "9c9d6af0-44f2-4777-9031-70eecd73be87"). InnerVolumeSpecName "kube-api-access-svxnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.641540 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644bd125-826c-4c6a-85e7-ab56ade2412f-kube-api-access-gq7v2" (OuterVolumeSpecName: "kube-api-access-gq7v2") pod "644bd125-826c-4c6a-85e7-ab56ade2412f" (UID: "644bd125-826c-4c6a-85e7-ab56ade2412f"). InnerVolumeSpecName "kube-api-access-gq7v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.735208 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq7v2\" (UniqueName: \"kubernetes.io/projected/644bd125-826c-4c6a-85e7-ab56ade2412f-kube-api-access-gq7v2\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.735259 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644bd125-826c-4c6a-85e7-ab56ade2412f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.735272 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9d6af0-44f2-4777-9031-70eecd73be87-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:15 crc kubenswrapper[4957]: I0218 14:53:15.735284 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svxnv\" (UniqueName: \"kubernetes.io/projected/9c9d6af0-44f2-4777-9031-70eecd73be87-kube-api-access-svxnv\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.344433 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.448218 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jfm\" (UniqueName: \"kubernetes.io/projected/9d31b9ed-0265-41fb-9f57-538cd96a5524-kube-api-access-v8jfm\") pod \"9d31b9ed-0265-41fb-9f57-538cd96a5524\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.448748 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-sb\") pod \"9d31b9ed-0265-41fb-9f57-538cd96a5524\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.448860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-config\") pod \"9d31b9ed-0265-41fb-9f57-538cd96a5524\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.448900 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-nb\") pod \"9d31b9ed-0265-41fb-9f57-538cd96a5524\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.449137 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-dns-svc\") pod \"9d31b9ed-0265-41fb-9f57-538cd96a5524\" (UID: \"9d31b9ed-0265-41fb-9f57-538cd96a5524\") " Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.456953 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d31b9ed-0265-41fb-9f57-538cd96a5524-kube-api-access-v8jfm" (OuterVolumeSpecName: "kube-api-access-v8jfm") pod "9d31b9ed-0265-41fb-9f57-538cd96a5524" (UID: "9d31b9ed-0265-41fb-9f57-538cd96a5524"). InnerVolumeSpecName "kube-api-access-v8jfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.536230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-config" (OuterVolumeSpecName: "config") pod "9d31b9ed-0265-41fb-9f57-538cd96a5524" (UID: "9d31b9ed-0265-41fb-9f57-538cd96a5524"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.536245 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d31b9ed-0265-41fb-9f57-538cd96a5524" (UID: "9d31b9ed-0265-41fb-9f57-538cd96a5524"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.539984 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d31b9ed-0265-41fb-9f57-538cd96a5524" (UID: "9d31b9ed-0265-41fb-9f57-538cd96a5524"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.549908 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d31b9ed-0265-41fb-9f57-538cd96a5524" (UID: "9d31b9ed-0265-41fb-9f57-538cd96a5524"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.552180 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.552198 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.552207 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.552216 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jfm\" (UniqueName: \"kubernetes.io/projected/9d31b9ed-0265-41fb-9f57-538cd96a5524-kube-api-access-v8jfm\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.552226 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d31b9ed-0265-41fb-9f57-538cd96a5524-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.636092 4957 generic.go:334] "Generic (PLEG): container finished" podID="a6828e6e-401e-4774-83d8-eb1dab6661a3" containerID="ab1eb9f7c718801405de55e7bcd5bb513c967c41a647c2cd7f8373128378eb56" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.636167 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g6wjq" event={"ID":"a6828e6e-401e-4774-83d8-eb1dab6661a3","Type":"ContainerDied","Data":"ab1eb9f7c718801405de55e7bcd5bb513c967c41a647c2cd7f8373128378eb56"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.639302 4957 generic.go:334] "Generic (PLEG): container finished" podID="91430091-a9a7-4cb1-b10e-85ccaeb24fbc" containerID="330f00f70ba75594a5d00cf4550a12dfd55ef09908bc75397b83d21789ec1bf5" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.639350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" event={"ID":"91430091-a9a7-4cb1-b10e-85ccaeb24fbc","Type":"ContainerDied","Data":"330f00f70ba75594a5d00cf4550a12dfd55ef09908bc75397b83d21789ec1bf5"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.641052 4957 generic.go:334] "Generic (PLEG): container finished" podID="23769552-282f-49ac-b650-6047f54aa60e" containerID="f7db9bb83d2e8d7b375f755c3737b86b6776a8d268f39ccec4e0e74b0e0ab0aa" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.641103 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a8b2-account-create-update-pwb27" event={"ID":"23769552-282f-49ac-b650-6047f54aa60e","Type":"ContainerDied","Data":"f7db9bb83d2e8d7b375f755c3737b86b6776a8d268f39ccec4e0e74b0e0ab0aa"} Feb 18 
14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.643865 4957 generic.go:334] "Generic (PLEG): container finished" podID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerID="91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.643905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" event={"ID":"9d31b9ed-0265-41fb-9f57-538cd96a5524","Type":"ContainerDied","Data":"91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.643921 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" event={"ID":"9d31b9ed-0265-41fb-9f57-538cd96a5524","Type":"ContainerDied","Data":"35f4d88346b30da7d1ce646e8b0219c1bfac2d1e3721c6129dd7ab540434123e"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.643938 4957 scope.go:117] "RemoveContainer" containerID="91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.644028 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wr9w" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.647209 4957 generic.go:334] "Generic (PLEG): container finished" podID="c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" containerID="daec13b1ac2780187283f3f5b92aa166d43a39e17de2cd241beca8f2bfe84743" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.647300 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7083-account-create-update-hrlxp" event={"ID":"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b","Type":"ContainerDied","Data":"daec13b1ac2780187283f3f5b92aa166d43a39e17de2cd241beca8f2bfe84743"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.665050 4957 generic.go:334] "Generic (PLEG): container finished" podID="147d3a76-36f3-4ba7-85ad-a05dfe2ec485" containerID="a8afd1b0df8368e492ffee19d866f2b61b22e1b01968a86b65e563fa8cd29b12" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.665141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f4rlf" event={"ID":"147d3a76-36f3-4ba7-85ad-a05dfe2ec485","Type":"ContainerDied","Data":"a8afd1b0df8368e492ffee19d866f2b61b22e1b01968a86b65e563fa8cd29b12"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.666957 4957 generic.go:334] "Generic (PLEG): container finished" podID="9976d545-5d6c-44b9-993d-14890ae6a93e" containerID="7267c9aa2a039d95be3611d4e5e41801655a8fc77da9ab18d32c09815ec615a2" exitCode=0 Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.667118 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" event={"ID":"9976d545-5d6c-44b9-993d-14890ae6a93e","Type":"ContainerDied","Data":"7267c9aa2a039d95be3611d4e5e41801655a8fc77da9ab18d32c09815ec615a2"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.667139 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" event={"ID":"9976d545-5d6c-44b9-993d-14890ae6a93e","Type":"ContainerStarted","Data":"06ec1653157d70bca40bc830edea3dfc10518e9d6d9909cdf567f429f52eecb5"} Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.686993 4957 scope.go:117] "RemoveContainer" containerID="3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.751965 
4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wr9w"] Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.760169 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wr9w"] Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.776482 4957 scope.go:117] "RemoveContainer" containerID="91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a" Feb 18 14:53:16 crc kubenswrapper[4957]: E0218 14:53:16.776817 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a\": container with ID starting with 91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a not found: ID does not exist" containerID="91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.777554 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a"} err="failed to get container status \"91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a\": rpc error: code = NotFound desc = could not find container \"91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a\": container with ID starting with 91b4ffb058fba64c9163ee43114b9c44f38cf1474092358cc10af3cdd0fc650a not found: ID does not exist" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.777616 4957 scope.go:117] "RemoveContainer" containerID="3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643" Feb 18 14:53:16 crc kubenswrapper[4957]: E0218 14:53:16.778903 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643\": container with ID starting with 3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643 not found: ID does not exist" containerID="3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643" Feb 18 14:53:16 crc kubenswrapper[4957]: I0218 14:53:16.778932 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643"} err="failed to get container status \"3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643\": rpc error: code = NotFound desc = could not find container \"3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643\": container with ID starting with 3a76d281766d271cfba901273f3afdbb686ae0c80027c8f1f371a0a804ff4643 not found: ID does not exist" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.191093 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.265820 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a720a9a-d59f-4a82-8b76-87196880798e-operator-scripts\") pod \"9a720a9a-d59f-4a82-8b76-87196880798e\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.265982 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7n5f\" (UniqueName: \"kubernetes.io/projected/9a720a9a-d59f-4a82-8b76-87196880798e-kube-api-access-d7n5f\") pod \"9a720a9a-d59f-4a82-8b76-87196880798e\" (UID: \"9a720a9a-d59f-4a82-8b76-87196880798e\") " Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.268076 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a720a9a-d59f-4a82-8b76-87196880798e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a720a9a-d59f-4a82-8b76-87196880798e" (UID: "9a720a9a-d59f-4a82-8b76-87196880798e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.286699 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a720a9a-d59f-4a82-8b76-87196880798e-kube-api-access-d7n5f" (OuterVolumeSpecName: "kube-api-access-d7n5f") pod "9a720a9a-d59f-4a82-8b76-87196880798e" (UID: "9a720a9a-d59f-4a82-8b76-87196880798e"). InnerVolumeSpecName "kube-api-access-d7n5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.369210 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a720a9a-d59f-4a82-8b76-87196880798e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.369249 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7n5f\" (UniqueName: \"kubernetes.io/projected/9a720a9a-d59f-4a82-8b76-87196880798e-kube-api-access-d7n5f\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.704675 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-43b7-account-create-update-69wnm" Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.705149 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-43b7-account-create-update-69wnm" event={"ID":"9a720a9a-d59f-4a82-8b76-87196880798e","Type":"ContainerDied","Data":"41ac01e3689aebfbc8665c800ccf5e4c8124132992f6a8048493370e67072d0a"} Feb 18 14:53:17 crc kubenswrapper[4957]: I0218 14:53:17.705184 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ac01e3689aebfbc8665c800ccf5e4c8124132992f6a8048493370e67072d0a" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.129865 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.188309 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmhf\" (UniqueName: \"kubernetes.io/projected/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-kube-api-access-qbmhf\") pod \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.189327 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-operator-scripts\") pod \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\" (UID: \"91430091-a9a7-4cb1-b10e-85ccaeb24fbc\") " Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.189793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91430091-a9a7-4cb1-b10e-85ccaeb24fbc" (UID: "91430091-a9a7-4cb1-b10e-85ccaeb24fbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.190437 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.201638 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-kube-api-access-qbmhf" (OuterVolumeSpecName: "kube-api-access-qbmhf") pod "91430091-a9a7-4cb1-b10e-85ccaeb24fbc" (UID: "91430091-a9a7-4cb1-b10e-85ccaeb24fbc"). InnerVolumeSpecName "kube-api-access-qbmhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.232772 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" path="/var/lib/kubelet/pods/9d31b9ed-0265-41fb-9f57-538cd96a5524/volumes" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.293648 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbmhf\" (UniqueName: \"kubernetes.io/projected/91430091-a9a7-4cb1-b10e-85ccaeb24fbc-kube-api-access-qbmhf\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.717730 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" event={"ID":"91430091-a9a7-4cb1-b10e-85ccaeb24fbc","Type":"ContainerDied","Data":"cb5ef4a4458f581c7ee15f1c420034170d64b9701db43a6c55dbc89267cfdce9"} Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.719209 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb5ef4a4458f581c7ee15f1c420034170d64b9701db43a6c55dbc89267cfdce9" Feb 18 14:53:18 crc kubenswrapper[4957]: I0218 14:53:18.717989 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-6fnm7" Feb 18 14:53:19 crc kubenswrapper[4957]: I0218 14:53:19.822544 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vbjt5"] Feb 18 14:53:19 crc kubenswrapper[4957]: I0218 14:53:19.833148 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vbjt5"] Feb 18 14:53:20 crc kubenswrapper[4957]: I0218 14:53:20.235365 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9d6af0-44f2-4777-9031-70eecd73be87" path="/var/lib/kubelet/pods/9c9d6af0-44f2-4777-9031-70eecd73be87/volumes" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.046800 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.047114 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.047155 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.047241 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift podName:5790a7ec-79bb-49af-842f-e2b879f33184 nodeName:}" failed. No retries permitted until 2026-02-18 14:53:37.047213058 +0000 UTC m=+1323.568077842 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift") pod "swift-storage-0" (UID: "5790a7ec-79bb-49af-842f-e2b879f33184") : configmap "swift-ring-files" not found Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.761611 4957 generic.go:334] "Generic (PLEG): container finished" podID="73ec2a80-4748-44f3-a979-1c854a4f3b49" containerID="64e7a4ff94ae8524520791c2fb09e798dc4ce8ee24989374ad99c4f871de292d" exitCode=0 Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.761751 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hzpv6" event={"ID":"73ec2a80-4748-44f3-a979-1c854a4f3b49","Type":"ContainerDied","Data":"64e7a4ff94ae8524520791c2fb09e798dc4ce8ee24989374ad99c4f871de292d"} Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.872253 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-n9k7h"] Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.872845 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a720a9a-d59f-4a82-8b76-87196880798e" containerName="mariadb-account-create-update" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.872873 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a720a9a-d59f-4a82-8b76-87196880798e" containerName="mariadb-account-create-update" Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.872893 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91430091-a9a7-4cb1-b10e-85ccaeb24fbc" containerName="mariadb-database-create" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.872903 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91430091-a9a7-4cb1-b10e-85ccaeb24fbc" containerName="mariadb-database-create" Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.872916 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644bd125-826c-4c6a-85e7-ab56ade2412f" containerName="mariadb-database-create" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.872924 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="644bd125-826c-4c6a-85e7-ab56ade2412f" containerName="mariadb-database-create" Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.872934 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerName="init" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.872941 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerName="init" Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.872998 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerName="dnsmasq-dns" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873007 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerName="dnsmasq-dns" Feb 18 14:53:21 crc kubenswrapper[4957]: E0218 14:53:21.873023 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9d6af0-44f2-4777-9031-70eecd73be87" containerName="mariadb-account-create-update" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873033 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9d6af0-44f2-4777-9031-70eecd73be87" containerName="mariadb-account-create-update" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873309 4957 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="644bd125-826c-4c6a-85e7-ab56ade2412f" containerName="mariadb-database-create" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873345 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a720a9a-d59f-4a82-8b76-87196880798e" containerName="mariadb-account-create-update" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873361 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="91430091-a9a7-4cb1-b10e-85ccaeb24fbc" containerName="mariadb-database-create" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873383 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9d6af0-44f2-4777-9031-70eecd73be87" containerName="mariadb-account-create-update" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.873396 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d31b9ed-0265-41fb-9f57-538cd96a5524" containerName="dnsmasq-dns" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.874340 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.876456 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f948p" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.877898 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.895276 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n9k7h"] Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.966691 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx6d\" (UniqueName: \"kubernetes.io/projected/7e292891-edfa-438c-aadb-3a12e7fdd9a4-kube-api-access-2lx6d\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.966859 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-config-data\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.966925 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-db-sync-config-data\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:21 crc kubenswrapper[4957]: I0218 14:53:21.966962 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-combined-ca-bundle\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.069284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-db-sync-config-data\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " 
pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.069351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-combined-ca-bundle\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.069500 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lx6d\" (UniqueName: \"kubernetes.io/projected/7e292891-edfa-438c-aadb-3a12e7fdd9a4-kube-api-access-2lx6d\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.069611 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-config-data\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.077638 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-db-sync-config-data\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.077717 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-combined-ca-bundle\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.082788 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-config-data\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.091283 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lx6d\" (UniqueName: \"kubernetes.io/projected/7e292891-edfa-438c-aadb-3a12e7fdd9a4-kube-api-access-2lx6d\") pod \"glance-db-sync-n9k7h\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:22 crc kubenswrapper[4957]: I0218 14:53:22.207883 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.430720 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g6wjq" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.437893 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.455976 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.460823 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.516301 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.524860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-operator-scripts\") pod \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.542683 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6828e6e-401e-4774-83d8-eb1dab6661a3-operator-scripts\") pod \"a6828e6e-401e-4774-83d8-eb1dab6661a3\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.542810 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99lx6\" (UniqueName: \"kubernetes.io/projected/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-kube-api-access-99lx6\") pod \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.542867 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2tcs\" (UniqueName: \"kubernetes.io/projected/a6828e6e-401e-4774-83d8-eb1dab6661a3-kube-api-access-l2tcs\") pod \"a6828e6e-401e-4774-83d8-eb1dab6661a3\" (UID: \"a6828e6e-401e-4774-83d8-eb1dab6661a3\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.543296 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrftq\" (UniqueName: \"kubernetes.io/projected/9976d545-5d6c-44b9-993d-14890ae6a93e-kube-api-access-qrftq\") pod \"9976d545-5d6c-44b9-993d-14890ae6a93e\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.543352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcg9g\" (UniqueName: \"kubernetes.io/projected/23769552-282f-49ac-b650-6047f54aa60e-kube-api-access-rcg9g\") pod \"23769552-282f-49ac-b650-6047f54aa60e\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.551591 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "147d3a76-36f3-4ba7-85ad-a05dfe2ec485" (UID: "147d3a76-36f3-4ba7-85ad-a05dfe2ec485"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.556146 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hzpv6" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.557516 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6828e6e-401e-4774-83d8-eb1dab6661a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6828e6e-401e-4774-83d8-eb1dab6661a3" (UID: "a6828e6e-401e-4774-83d8-eb1dab6661a3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.565720 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9976d545-5d6c-44b9-993d-14890ae6a93e-kube-api-access-qrftq" (OuterVolumeSpecName: "kube-api-access-qrftq") pod "9976d545-5d6c-44b9-993d-14890ae6a93e" (UID: "9976d545-5d6c-44b9-993d-14890ae6a93e"). InnerVolumeSpecName "kube-api-access-qrftq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.568539 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-kube-api-access-99lx6" (OuterVolumeSpecName: "kube-api-access-99lx6") pod "c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" (UID: "c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b"). InnerVolumeSpecName "kube-api-access-99lx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.577303 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6828e6e-401e-4774-83d8-eb1dab6661a3-kube-api-access-l2tcs" (OuterVolumeSpecName: "kube-api-access-l2tcs") pod "a6828e6e-401e-4774-83d8-eb1dab6661a3" (UID: "a6828e6e-401e-4774-83d8-eb1dab6661a3"). InnerVolumeSpecName "kube-api-access-l2tcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.605726 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23769552-282f-49ac-b650-6047f54aa60e-kube-api-access-rcg9g" (OuterVolumeSpecName: "kube-api-access-rcg9g") pod "23769552-282f-49ac-b650-6047f54aa60e" (UID: "23769552-282f-49ac-b650-6047f54aa60e"). InnerVolumeSpecName "kube-api-access-rcg9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647759 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/73ec2a80-4748-44f3-a979-1c854a4f3b49-etc-swift\") pod \"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647845 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23769552-282f-49ac-b650-6047f54aa60e-operator-scripts\") pod \"23769552-282f-49ac-b650-6047f54aa60e\" (UID: \"23769552-282f-49ac-b650-6047f54aa60e\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647876 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9976d545-5d6c-44b9-993d-14890ae6a93e-operator-scripts\") pod \"9976d545-5d6c-44b9-993d-14890ae6a93e\" (UID: \"9976d545-5d6c-44b9-993d-14890ae6a93e\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647910 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-operator-scripts\") pod \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\" (UID: \"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647936 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-combined-ca-bundle\") pod \"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647980 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-swiftconf\") pod \"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.647996 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5v9n\" (UniqueName: \"kubernetes.io/projected/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-kube-api-access-c5v9n\") pod \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\" (UID: \"147d3a76-36f3-4ba7-85ad-a05dfe2ec485\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648104 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-dispersionconf\") pod \"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648155 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wpbn\" (UniqueName: \"kubernetes.io/projected/73ec2a80-4748-44f3-a979-1c854a4f3b49-kube-api-access-4wpbn\") pod \"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648216 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-ring-data-devices\") pod 
\"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648319 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-scripts\") pod \"73ec2a80-4748-44f3-a979-1c854a4f3b49\" (UID: \"73ec2a80-4748-44f3-a979-1c854a4f3b49\") " Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648482 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9976d545-5d6c-44b9-993d-14890ae6a93e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9976d545-5d6c-44b9-993d-14890ae6a93e" (UID: "9976d545-5d6c-44b9-993d-14890ae6a93e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648891 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" (UID: "c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.648967 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrftq\" (UniqueName: \"kubernetes.io/projected/9976d545-5d6c-44b9-993d-14890ae6a93e-kube-api-access-qrftq\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649155 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcg9g\" (UniqueName: \"kubernetes.io/projected/23769552-282f-49ac-b650-6047f54aa60e-kube-api-access-rcg9g\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649215 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9976d545-5d6c-44b9-993d-14890ae6a93e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649271 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649325 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6828e6e-401e-4774-83d8-eb1dab6661a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649562 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99lx6\" (UniqueName: \"kubernetes.io/projected/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-kube-api-access-99lx6\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649618 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2tcs\" (UniqueName: \"kubernetes.io/projected/a6828e6e-401e-4774-83d8-eb1dab6661a3-kube-api-access-l2tcs\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.649232 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ec2a80-4748-44f3-a979-1c854a4f3b49-etc-swift" (OuterVolumeSpecName: "etc-swift") pod 
"73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.650043 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23769552-282f-49ac-b650-6047f54aa60e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23769552-282f-49ac-b650-6047f54aa60e" (UID: "23769552-282f-49ac-b650-6047f54aa60e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.653333 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.655861 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ec2a80-4748-44f3-a979-1c854a4f3b49-kube-api-access-4wpbn" (OuterVolumeSpecName: "kube-api-access-4wpbn") pod "73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "kube-api-access-4wpbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.657620 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-kube-api-access-c5v9n" (OuterVolumeSpecName: "kube-api-access-c5v9n") pod "147d3a76-36f3-4ba7-85ad-a05dfe2ec485" (UID: "147d3a76-36f3-4ba7-85ad-a05dfe2ec485"). InnerVolumeSpecName "kube-api-access-c5v9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.673390 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.678901 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-scripts" (OuterVolumeSpecName: "scripts") pod "73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.679424 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.706486 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "73ec2a80-4748-44f3-a979-1c854a4f3b49" (UID: "73ec2a80-4748-44f3-a979-1c854a4f3b49"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.751945 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23769552-282f-49ac-b650-6047f54aa60e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.751981 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.751992 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752001 4957 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752010 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5v9n\" (UniqueName: \"kubernetes.io/projected/147d3a76-36f3-4ba7-85ad-a05dfe2ec485-kube-api-access-c5v9n\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752020 4957 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/73ec2a80-4748-44f3-a979-1c854a4f3b49-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752028 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wpbn\" (UniqueName: \"kubernetes.io/projected/73ec2a80-4748-44f3-a979-1c854a4f3b49-kube-api-access-4wpbn\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752036 4957 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752044 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ec2a80-4748-44f3-a979-1c854a4f3b49-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.752051 4957 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/73ec2a80-4748-44f3-a979-1c854a4f3b49-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.829247 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9" event={"ID":"9976d545-5d6c-44b9-993d-14890ae6a93e","Type":"ContainerDied","Data":"06ec1653157d70bca40bc830edea3dfc10518e9d6d9909cdf567f429f52eecb5"} Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.829298 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ec1653157d70bca40bc830edea3dfc10518e9d6d9909cdf567f429f52eecb5"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.829394 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-66ea-account-create-update-ttrz9"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.839824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g6wjq" event={"ID":"a6828e6e-401e-4774-83d8-eb1dab6661a3","Type":"ContainerDied","Data":"fd1285146f89bc6f9497ce1aedcb8264c64cec90b4ad5c4221c334f8478f9ac1"}
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.839879 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1285146f89bc6f9497ce1aedcb8264c64cec90b4ad5c4221c334f8478f9ac1"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.839884 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g6wjq"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.842805 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fb8lv"]
Feb 18 14:53:24 crc kubenswrapper[4957]: E0218 14:53:24.843411 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9976d545-5d6c-44b9-993d-14890ae6a93e" containerName="mariadb-account-create-update"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843463 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9976d545-5d6c-44b9-993d-14890ae6a93e" containerName="mariadb-account-create-update"
Feb 18 14:53:24 crc kubenswrapper[4957]: E0218 14:53:24.843474 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" containerName="mariadb-account-create-update"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843480 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" containerName="mariadb-account-create-update"
Feb 18 14:53:24 crc kubenswrapper[4957]: E0218 14:53:24.843514 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6828e6e-401e-4774-83d8-eb1dab6661a3" containerName="mariadb-database-create"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843520 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6828e6e-401e-4774-83d8-eb1dab6661a3" containerName="mariadb-database-create"
Feb 18 14:53:24 crc kubenswrapper[4957]: E0218 14:53:24.843535 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec2a80-4748-44f3-a979-1c854a4f3b49" containerName="swift-ring-rebalance"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843541 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec2a80-4748-44f3-a979-1c854a4f3b49" containerName="swift-ring-rebalance"
Feb 18 14:53:24 crc kubenswrapper[4957]: E0218 14:53:24.843552 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147d3a76-36f3-4ba7-85ad-a05dfe2ec485" containerName="mariadb-database-create"
Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843559 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="147d3a76-36f3-4ba7-85ad-a05dfe2ec485" containerName="mariadb-database-create"
Feb 18 14:53:24 crc kubenswrapper[4957]: E0218 14:53:24.843572 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23769552-282f-49ac-b650-6047f54aa60e"
containerName="mariadb-account-create-update" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843579 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="23769552-282f-49ac-b650-6047f54aa60e" containerName="mariadb-account-create-update" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843788 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="23769552-282f-49ac-b650-6047f54aa60e" containerName="mariadb-account-create-update" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843810 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" containerName="mariadb-account-create-update" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843824 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9976d545-5d6c-44b9-993d-14890ae6a93e" containerName="mariadb-account-create-update" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843841 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="147d3a76-36f3-4ba7-85ad-a05dfe2ec485" containerName="mariadb-database-create" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843853 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6828e6e-401e-4774-83d8-eb1dab6661a3" containerName="mariadb-database-create" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.843874 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec2a80-4748-44f3-a979-1c854a4f3b49" containerName="swift-ring-rebalance" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.844775 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.845870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hzpv6" event={"ID":"73ec2a80-4748-44f3-a979-1c854a4f3b49","Type":"ContainerDied","Data":"b067e88e5026b0630948a152e54c6b1f1e1aa9517c96fe6b1782a3df8bc83ac8"} Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.845906 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b067e88e5026b0630948a152e54c6b1f1e1aa9517c96fe6b1782a3df8bc83ac8" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.845976 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hzpv6" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.846990 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.852945 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a8b2-account-create-update-pwb27" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.853090 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a8b2-account-create-update-pwb27" event={"ID":"23769552-282f-49ac-b650-6047f54aa60e","Type":"ContainerDied","Data":"c18de5e9139a9b688977f8a3cf55a870a57f7ace93221bfc200696c06fe45c81"} Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.853285 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c18de5e9139a9b688977f8a3cf55a870a57f7ace93221bfc200696c06fe45c81" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.859623 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fb8lv"] Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.861553 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7083-account-create-update-hrlxp" event={"ID":"c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b","Type":"ContainerDied","Data":"ba78a0530f17fba2fb83ba6c940bec3c211da2db99da5886406fa80fc332ffad"} Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.861595 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba78a0530f17fba2fb83ba6c940bec3c211da2db99da5886406fa80fc332ffad" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.861718 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7083-account-create-update-hrlxp" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.871688 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f4rlf" event={"ID":"147d3a76-36f3-4ba7-85ad-a05dfe2ec485","Type":"ContainerDied","Data":"9269627efb3eb2adca02f62f71d321357dd40dd2581cf57e078c3d15e39663c7"} Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.871734 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9269627efb3eb2adca02f62f71d321357dd40dd2581cf57e078c3d15e39663c7" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.871813 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-f4rlf" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.955358 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglz9\" (UniqueName: \"kubernetes.io/projected/c9333dc0-cbe3-4497-93e0-bb2473de68b0-kube-api-access-mglz9\") pod \"root-account-create-update-fb8lv\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:24 crc kubenswrapper[4957]: I0218 14:53:24.955435 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9333dc0-cbe3-4497-93e0-bb2473de68b0-operator-scripts\") pod \"root-account-create-update-fb8lv\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.059049 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mglz9\" (UniqueName: \"kubernetes.io/projected/c9333dc0-cbe3-4497-93e0-bb2473de68b0-kube-api-access-mglz9\") pod \"root-account-create-update-fb8lv\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.059329 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9333dc0-cbe3-4497-93e0-bb2473de68b0-operator-scripts\") pod \"root-account-create-update-fb8lv\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.060366 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9333dc0-cbe3-4497-93e0-bb2473de68b0-operator-scripts\") pod \"root-account-create-update-fb8lv\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.076927 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglz9\" (UniqueName: \"kubernetes.io/projected/c9333dc0-cbe3-4497-93e0-bb2473de68b0-kube-api-access-mglz9\") pod \"root-account-create-update-fb8lv\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.216928 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n9k7h"] Feb 18 14:53:25 crc kubenswrapper[4957]: W0218 14:53:25.220245 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e292891_edfa_438c_aadb_3a12e7fdd9a4.slice/crio-a4c5d76202ac258be45d01b20ae0da5d786f032d317f8fb410d8912e9794b686 WatchSource:0}: Error finding container a4c5d76202ac258be45d01b20ae0da5d786f032d317f8fb410d8912e9794b686: Status 404 returned error can't find the container with id a4c5d76202ac258be45d01b20ae0da5d786f032d317f8fb410d8912e9794b686 Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.222943 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.285335 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.713712 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fb8lv"] Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.883986 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fb8lv" event={"ID":"c9333dc0-cbe3-4497-93e0-bb2473de68b0","Type":"ContainerStarted","Data":"273cd824cf778ae86bd3d656af425e4366459408d62ef884fa7dcb09f2bf6ea2"} Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.886242 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n9k7h" event={"ID":"7e292891-edfa-438c-aadb-3a12e7fdd9a4","Type":"ContainerStarted","Data":"a4c5d76202ac258be45d01b20ae0da5d786f032d317f8fb410d8912e9794b686"} Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.890251 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerStarted","Data":"8f029520e2ca59bbaef36a5c4bce5f194692763eecfd2eb7a68043aab76dee8f"} Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.891677 4957 generic.go:334] "Generic (PLEG): container finished" podID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerID="d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270" exitCode=0 Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.891726 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12d531cb-f58f-44a6-a638-29ebb85fdbb3","Type":"ContainerDied","Data":"d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270"} Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.896032 4957 generic.go:334] "Generic (PLEG): container finished" podID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerID="5ae784f50deb6c10121274079b9c6592c06242581b699c710aac5ddd82f24d65" exitCode=0 Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.896194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0e8ec2b-400b-4454-acdd-517a1727e9f8","Type":"ContainerDied","Data":"5ae784f50deb6c10121274079b9c6592c06242581b699c710aac5ddd82f24d65"} Feb 18 14:53:25 crc kubenswrapper[4957]: I0218 14:53:25.911678 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fb8lv" podStartSLOduration=1.911659797 podStartE2EDuration="1.911659797s" podCreationTimestamp="2026-02-18 14:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:25.903681534 +0000 UTC m=+1312.424546288" watchObservedRunningTime="2026-02-18 14:53:25.911659797 +0000 UTC m=+1312.432524541" Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.340787 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.920653 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12d531cb-f58f-44a6-a638-29ebb85fdbb3","Type":"ContainerStarted","Data":"351d83c0114e5e295ff8d56e0e90957a2bdf7388269b2c16995008eb48ee3a67"} Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.922387 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:53:26 crc 
Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.924561 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0e8ec2b-400b-4454-acdd-517a1727e9f8","Type":"ContainerStarted","Data":"6a1aba853a6b85f2dc58df840dede15df72b5d14a98d48ba6dec3e389c1d1050"}
Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.925181 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.927212 4957 generic.go:334] "Generic (PLEG): container finished" podID="c9333dc0-cbe3-4497-93e0-bb2473de68b0" containerID="9a9697902f52127868f1c437e783f67935f02fc41ba4e03511ff79d0b1afaa2f" exitCode=0
Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.927273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fb8lv" event={"ID":"c9333dc0-cbe3-4497-93e0-bb2473de68b0","Type":"ContainerDied","Data":"9a9697902f52127868f1c437e783f67935f02fc41ba4e03511ff79d0b1afaa2f"}
Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.953899 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371956.900896 podStartE2EDuration="1m19.953879097s" podCreationTimestamp="2026-02-18 14:52:07 +0000 UTC" firstStartedPulling="2026-02-18 14:52:09.932157246 +0000 UTC m=+1236.453021990" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:26.943530846 +0000 UTC m=+1313.464395610" watchObservedRunningTime="2026-02-18 14:53:26.953879097 +0000 UTC m=+1313.474743841"
Feb 18 14:53:26 crc kubenswrapper[4957]: I0218 14:53:26.999069 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.582666975 podStartE2EDuration="1m20.9990451s" podCreationTimestamp="2026-02-18 14:52:06 +0000 UTC" firstStartedPulling="2026-02-18 14:52:09.315205014 +0000 UTC m=+1235.836069758" lastFinishedPulling="2026-02-18 14:52:51.731583139 +0000 UTC m=+1278.252447883" observedRunningTime="2026-02-18 14:53:26.990645016 +0000 UTC m=+1313.511509770" watchObservedRunningTime="2026-02-18 14:53:26.9990451 +0000 UTC m=+1313.519909844"
Feb 18 14:53:27 crc kubenswrapper[4957]: I0218 14:53:27.946570 4957 generic.go:334] "Generic (PLEG): container finished" podID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" exitCode=0
Feb 18 14:53:27 crc kubenswrapper[4957]: I0218 14:53:27.946903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"81a0cd7a-3f64-4555-96e9-ad69c2518568","Type":"ContainerDied","Data":"a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52"}
Feb 18 14:53:27 crc kubenswrapper[4957]: I0218 14:53:27.956492 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerStarted","Data":"59f7116cb5db6a924d7e7a58a3dd919de240d0b5326611e603db981243e90536"}
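The podStartSLOduration=-9223371956.900896 above (and the similar values for rabbitmq-server-2 and rabbitmq-server-1 further down) is an int64 artifact, not a real duration. lastFinishedPulling was never recorded for this pod (it is the zero time, 0001-01-01), so an image-pull duration computed from it saturates at Go's minimum time.Duration, and subtracting that from the 1m19.95s end-to-end duration wraps around int64. A short self-contained reproduction of the arithmetic, assuming the tracker computes the SLO duration as end-to-end time minus image-pull time (which the paired E2E/pulling fields in the line suggest):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the rabbitmq-cell1-server-0 line above.
        e2e := 79953879097 * time.Nanosecond // podStartE2EDuration = 1m19.953879097s
        firstStartedPulling, _ := time.Parse(time.RFC3339Nano, "2026-02-18T14:52:09.932157246Z")
        var lastFinishedPulling time.Time // zero value: "0001-01-01 00:00:00 +0000 UTC"

        // A roughly -2025-year gap overflows time.Duration; Sub documents that
        // it then saturates at the minimum Duration (math.MinInt64 nanoseconds).
        pull := lastFinishedPulling.Sub(firstStartedPulling)

        // Plain Duration subtraction does not saturate; it wraps around int64.
        slo := e2e - pull
        fmt.Printf("%.4f\n", slo.Seconds()) // ≈ -9223371956.9009, the value logged above
    }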
Need to start a new one" pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.473352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9333dc0-cbe3-4497-93e0-bb2473de68b0-operator-scripts\") pod \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.473476 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mglz9\" (UniqueName: \"kubernetes.io/projected/c9333dc0-cbe3-4497-93e0-bb2473de68b0-kube-api-access-mglz9\") pod \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\" (UID: \"c9333dc0-cbe3-4497-93e0-bb2473de68b0\") " Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.475627 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9333dc0-cbe3-4497-93e0-bb2473de68b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9333dc0-cbe3-4497-93e0-bb2473de68b0" (UID: "c9333dc0-cbe3-4497-93e0-bb2473de68b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.479707 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9333dc0-cbe3-4497-93e0-bb2473de68b0-kube-api-access-mglz9" (OuterVolumeSpecName: "kube-api-access-mglz9") pod "c9333dc0-cbe3-4497-93e0-bb2473de68b0" (UID: "c9333dc0-cbe3-4497-93e0-bb2473de68b0"). InnerVolumeSpecName "kube-api-access-mglz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.576038 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9333dc0-cbe3-4497-93e0-bb2473de68b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.576076 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mglz9\" (UniqueName: \"kubernetes.io/projected/c9333dc0-cbe3-4497-93e0-bb2473de68b0-kube-api-access-mglz9\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.967950 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"81a0cd7a-3f64-4555-96e9-ad69c2518568","Type":"ContainerStarted","Data":"f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8"} Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.969403 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.969654 4957 generic.go:334] "Generic (PLEG): container finished" podID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerID="d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007" exitCode=0 Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.969752 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f237ab9c-fc69-491b-98da-97ce92214eb0","Type":"ContainerDied","Data":"d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007"} Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.973372 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fb8lv" 
event={"ID":"c9333dc0-cbe3-4497-93e0-bb2473de68b0","Type":"ContainerDied","Data":"273cd824cf778ae86bd3d656af425e4366459408d62ef884fa7dcb09f2bf6ea2"} Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.973499 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273cd824cf778ae86bd3d656af425e4366459408d62ef884fa7dcb09f2bf6ea2" Feb 18 14:53:28 crc kubenswrapper[4957]: I0218 14:53:28.973434 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fb8lv" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.025023 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371953.829779 podStartE2EDuration="1m23.024997439s" podCreationTimestamp="2026-02-18 14:52:06 +0000 UTC" firstStartedPulling="2026-02-18 14:52:09.750657508 +0000 UTC m=+1236.271522252" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:29.018030086 +0000 UTC m=+1315.538894830" watchObservedRunningTime="2026-02-18 14:53:29.024997439 +0000 UTC m=+1315.545862183" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.626051 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz"] Feb 18 14:53:29 crc kubenswrapper[4957]: E0218 14:53:29.627155 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9333dc0-cbe3-4497-93e0-bb2473de68b0" containerName="mariadb-account-create-update" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.627185 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9333dc0-cbe3-4497-93e0-bb2473de68b0" containerName="mariadb-account-create-update" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.628033 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9333dc0-cbe3-4497-93e0-bb2473de68b0" containerName="mariadb-account-create-update" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.629008 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.686684 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz"] Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.703875 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzwjh\" (UniqueName: \"kubernetes.io/projected/c49ef747-0aae-4f86-9688-69e8fd172494-kube-api-access-wzwjh\") pod \"mysqld-exporter-openstack-cell1-db-create-6fgdz\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.704789 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c49ef747-0aae-4f86-9688-69e8fd172494-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6fgdz\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.806781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c49ef747-0aae-4f86-9688-69e8fd172494-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6fgdz\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.806914 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzwjh\" (UniqueName: \"kubernetes.io/projected/c49ef747-0aae-4f86-9688-69e8fd172494-kube-api-access-wzwjh\") pod \"mysqld-exporter-openstack-cell1-db-create-6fgdz\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.807701 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c49ef747-0aae-4f86-9688-69e8fd172494-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6fgdz\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.827492 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-7b22-account-create-update-zxhd8"] Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.828769 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.830365 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzwjh\" (UniqueName: \"kubernetes.io/projected/c49ef747-0aae-4f86-9688-69e8fd172494-kube-api-access-wzwjh\") pod \"mysqld-exporter-openstack-cell1-db-create-6fgdz\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.838941 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.839187 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7b22-account-create-update-zxhd8"] Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.909152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dzx\" (UniqueName: \"kubernetes.io/projected/d8d94c2a-28d8-4811-97be-0020687f1773-kube-api-access-s2dzx\") pod \"mysqld-exporter-7b22-account-create-update-zxhd8\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.909209 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d94c2a-28d8-4811-97be-0020687f1773-operator-scripts\") pod \"mysqld-exporter-7b22-account-create-update-zxhd8\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:29 crc kubenswrapper[4957]: I0218 14:53:29.973202 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.003083 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f237ab9c-fc69-491b-98da-97ce92214eb0","Type":"ContainerStarted","Data":"360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6"} Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.012045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dzx\" (UniqueName: \"kubernetes.io/projected/d8d94c2a-28d8-4811-97be-0020687f1773-kube-api-access-s2dzx\") pod \"mysqld-exporter-7b22-account-create-update-zxhd8\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.012462 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d94c2a-28d8-4811-97be-0020687f1773-operator-scripts\") pod \"mysqld-exporter-7b22-account-create-update-zxhd8\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.013163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d94c2a-28d8-4811-97be-0020687f1773-operator-scripts\") pod \"mysqld-exporter-7b22-account-create-update-zxhd8\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.033024 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dzx\" (UniqueName: \"kubernetes.io/projected/d8d94c2a-28d8-4811-97be-0020687f1773-kube-api-access-s2dzx\") pod \"mysqld-exporter-7b22-account-create-update-zxhd8\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.038571 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371952.816227 podStartE2EDuration="1m24.038549285s" podCreationTimestamp="2026-02-18 14:52:06 +0000 UTC" firstStartedPulling="2026-02-18 14:52:09.899140046 +0000 UTC m=+1236.420004790" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:30.031307834 +0000 UTC m=+1316.552172578" watchObservedRunningTime="2026-02-18 14:53:30.038549285 +0000 UTC m=+1316.559414019" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.194236 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.584048 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz"] Feb 18 14:53:30 crc kubenswrapper[4957]: W0218 14:53:30.965083 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8d94c2a_28d8_4811_97be_0020687f1773.slice/crio-d7df16639f7ec53f4ad2431b97d15879e8661f1ab448dec19c7c07dbc8c1047a WatchSource:0}: Error finding container d7df16639f7ec53f4ad2431b97d15879e8661f1ab448dec19c7c07dbc8c1047a: Status 404 returned error can't find the container with id d7df16639f7ec53f4ad2431b97d15879e8661f1ab448dec19c7c07dbc8c1047a Feb 18 14:53:30 crc kubenswrapper[4957]: I0218 14:53:30.966680 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7b22-account-create-update-zxhd8"] Feb 18 14:53:31 crc kubenswrapper[4957]: I0218 14:53:31.014164 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" event={"ID":"d8d94c2a-28d8-4811-97be-0020687f1773","Type":"ContainerStarted","Data":"d7df16639f7ec53f4ad2431b97d15879e8661f1ab448dec19c7c07dbc8c1047a"} Feb 18 14:53:31 crc kubenswrapper[4957]: I0218 14:53:31.016695 4957 generic.go:334] "Generic (PLEG): container finished" podID="c49ef747-0aae-4f86-9688-69e8fd172494" containerID="80851f0c18779713941fc2673565d410018a4662791dbb85c663bcbc5d866a26" exitCode=0 Feb 18 14:53:31 crc kubenswrapper[4957]: I0218 14:53:31.016725 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" event={"ID":"c49ef747-0aae-4f86-9688-69e8fd172494","Type":"ContainerDied","Data":"80851f0c18779713941fc2673565d410018a4662791dbb85c663bcbc5d866a26"} Feb 18 14:53:31 crc kubenswrapper[4957]: I0218 14:53:31.016743 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" event={"ID":"c49ef747-0aae-4f86-9688-69e8fd172494","Type":"ContainerStarted","Data":"40c0d4d134adbc4c4cc3c4fd388a95784510a6a5b556d098cacf38c42d7162ae"} Feb 18 14:53:32 crc kubenswrapper[4957]: I0218 14:53:32.058986 4957 generic.go:334] "Generic (PLEG): container finished" podID="d8d94c2a-28d8-4811-97be-0020687f1773" containerID="df0543d7ec8630cf94804960e03800bd22ba1c6caadf5370cfe686c2dd710a74" exitCode=0 Feb 18 14:53:32 crc kubenswrapper[4957]: I0218 14:53:32.059808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" event={"ID":"d8d94c2a-28d8-4811-97be-0020687f1773","Type":"ContainerDied","Data":"df0543d7ec8630cf94804960e03800bd22ba1c6caadf5370cfe686c2dd710a74"} Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.063471 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" containerName="ovn-controller" probeResult="failure" output=< Feb 18 14:53:33 crc kubenswrapper[4957]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 14:53:33 crc kubenswrapper[4957]: > Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.088580 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.133210 4957 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bgjwb" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.320289 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.388313 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c49ef747-0aae-4f86-9688-69e8fd172494-operator-scripts\") pod \"c49ef747-0aae-4f86-9688-69e8fd172494\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.388412 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzwjh\" (UniqueName: \"kubernetes.io/projected/c49ef747-0aae-4f86-9688-69e8fd172494-kube-api-access-wzwjh\") pod \"c49ef747-0aae-4f86-9688-69e8fd172494\" (UID: \"c49ef747-0aae-4f86-9688-69e8fd172494\") " Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.390216 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c49ef747-0aae-4f86-9688-69e8fd172494-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c49ef747-0aae-4f86-9688-69e8fd172494" (UID: "c49ef747-0aae-4f86-9688-69e8fd172494"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.407844 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5s4rw-config-k5xw2"] Feb 18 14:53:33 crc kubenswrapper[4957]: E0218 14:53:33.408444 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49ef747-0aae-4f86-9688-69e8fd172494" containerName="mariadb-database-create" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.408461 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49ef747-0aae-4f86-9688-69e8fd172494" containerName="mariadb-database-create" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.408673 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49ef747-0aae-4f86-9688-69e8fd172494" containerName="mariadb-database-create" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.409689 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.410535 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49ef747-0aae-4f86-9688-69e8fd172494-kube-api-access-wzwjh" (OuterVolumeSpecName: "kube-api-access-wzwjh") pod "c49ef747-0aae-4f86-9688-69e8fd172494" (UID: "c49ef747-0aae-4f86-9688-69e8fd172494"). InnerVolumeSpecName "kube-api-access-wzwjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.428811 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.441565 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5s4rw-config-k5xw2"] Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.490706 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2km6\" (UniqueName: \"kubernetes.io/projected/bb9a4bcc-e05d-4168-8615-e87279f3b92d-kube-api-access-n2km6\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.490799 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-log-ovn\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.490923 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.490977 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-scripts\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.491363 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run-ovn\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.491448 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-additional-scripts\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.491663 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzwjh\" (UniqueName: \"kubernetes.io/projected/c49ef747-0aae-4f86-9688-69e8fd172494-kube-api-access-wzwjh\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.491690 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c49ef747-0aae-4f86-9688-69e8fd172494-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.593592 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run-ovn\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594065 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-additional-scripts\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2km6\" (UniqueName: \"kubernetes.io/projected/bb9a4bcc-e05d-4168-8615-e87279f3b92d-kube-api-access-n2km6\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594140 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run-ovn\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594237 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-log-ovn\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594279 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-log-ovn\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594285 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-scripts\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.594368 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.595079 4957 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-additional-scripts\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.596733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-scripts\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.616606 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2km6\" (UniqueName: \"kubernetes.io/projected/bb9a4bcc-e05d-4168-8615-e87279f3b92d-kube-api-access-n2km6\") pod \"ovn-controller-5s4rw-config-k5xw2\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") " pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.627004 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.695836 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dzx\" (UniqueName: \"kubernetes.io/projected/d8d94c2a-28d8-4811-97be-0020687f1773-kube-api-access-s2dzx\") pod \"d8d94c2a-28d8-4811-97be-0020687f1773\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.696032 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d94c2a-28d8-4811-97be-0020687f1773-operator-scripts\") pod \"d8d94c2a-28d8-4811-97be-0020687f1773\" (UID: \"d8d94c2a-28d8-4811-97be-0020687f1773\") " Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.696606 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d94c2a-28d8-4811-97be-0020687f1773-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8d94c2a-28d8-4811-97be-0020687f1773" (UID: "d8d94c2a-28d8-4811-97be-0020687f1773"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.696721 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d94c2a-28d8-4811-97be-0020687f1773-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.699929 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d94c2a-28d8-4811-97be-0020687f1773-kube-api-access-s2dzx" (OuterVolumeSpecName: "kube-api-access-s2dzx") pod "d8d94c2a-28d8-4811-97be-0020687f1773" (UID: "d8d94c2a-28d8-4811-97be-0020687f1773"). InnerVolumeSpecName "kube-api-access-s2dzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.798581 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dzx\" (UniqueName: \"kubernetes.io/projected/d8d94c2a-28d8-4811-97be-0020687f1773-kube-api-access-s2dzx\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:33 crc kubenswrapper[4957]: I0218 14:53:33.849019 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.092191 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" event={"ID":"d8d94c2a-28d8-4811-97be-0020687f1773","Type":"ContainerDied","Data":"d7df16639f7ec53f4ad2431b97d15879e8661f1ab448dec19c7c07dbc8c1047a"} Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.092686 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7df16639f7ec53f4ad2431b97d15879e8661f1ab448dec19c7c07dbc8c1047a" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.092743 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7b22-account-create-update-zxhd8" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.105527 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerStarted","Data":"fccc976c9722aa4a2fe67ed0ff29a05d8dc16986238e71fd80704ea04c8d5922"} Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.116814 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.116807 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz" event={"ID":"c49ef747-0aae-4f86-9688-69e8fd172494","Type":"ContainerDied","Data":"40c0d4d134adbc4c4cc3c4fd388a95784510a6a5b556d098cacf38c42d7162ae"} Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.116986 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c0d4d134adbc4c4cc3c4fd388a95784510a6a5b556d098cacf38c42d7162ae" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.157825 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.012937353 podStartE2EDuration="1m20.157801782s" podCreationTimestamp="2026-02-18 14:52:14 +0000 UTC" firstStartedPulling="2026-02-18 14:52:17.151062627 +0000 UTC m=+1243.671927371" lastFinishedPulling="2026-02-18 14:53:33.295927056 +0000 UTC m=+1319.816791800" observedRunningTime="2026-02-18 14:53:34.148865762 +0000 UTC m=+1320.669730506" watchObservedRunningTime="2026-02-18 14:53:34.157801782 +0000 UTC m=+1320.678666526" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.365895 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5s4rw-config-k5xw2"] Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.995771 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:53:34 crc kubenswrapper[4957]: E0218 14:53:34.996650 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d94c2a-28d8-4811-97be-0020687f1773" containerName="mariadb-account-create-update" Feb 18 14:53:34 crc 
kubenswrapper[4957]: I0218 14:53:34.996676 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d94c2a-28d8-4811-97be-0020687f1773" containerName="mariadb-account-create-update" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.996924 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d94c2a-28d8-4811-97be-0020687f1773" containerName="mariadb-account-create-update" Feb 18 14:53:34 crc kubenswrapper[4957]: I0218 14:53:34.997873 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.006872 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.029764 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.126749 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.127166 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wrd2\" (UniqueName: \"kubernetes.io/projected/f35cf4ac-acd8-4db3-b633-bab9cac6e322-kube-api-access-2wrd2\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.127218 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-config-data\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.228981 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wrd2\" (UniqueName: \"kubernetes.io/projected/f35cf4ac-acd8-4db3-b633-bab9cac6e322-kube-api-access-2wrd2\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.229344 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-config-data\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.229650 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.251843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-config-data\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc 
kubenswrapper[4957]: I0218 14:53:35.252192 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.271294 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wrd2\" (UniqueName: \"kubernetes.io/projected/f35cf4ac-acd8-4db3-b633-bab9cac6e322-kube-api-access-2wrd2\") pod \"mysqld-exporter-0\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.323310 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:53:35 crc kubenswrapper[4957]: I0218 14:53:35.932248 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.063836 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.072999 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5790a7ec-79bb-49af-842f-e2b879f33184-etc-swift\") pod \"swift-storage-0\" (UID: \"5790a7ec-79bb-49af-842f-e2b879f33184\") " pod="openstack/swift-storage-0" Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.154700 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.154700 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.278939 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.278992 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.279031 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.279798 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3685eb85ff4135d1d788989087159dbba7bab3b709d7c66d8e50ece795084250"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 14:53:37 crc kubenswrapper[4957]: I0218 14:53:37.279850 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://3685eb85ff4135d1d788989087159dbba7bab3b709d7c66d8e50ece795084250" gracePeriod=600
Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.073286 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" containerName="ovn-controller" probeResult="failure" output=<
Feb 18 14:53:38 crc kubenswrapper[4957]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 18 14:53:38 crc kubenswrapper[4957]: >
Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.161157 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="3685eb85ff4135d1d788989087159dbba7bab3b709d7c66d8e50ece795084250" exitCode=0
Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.161199 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"3685eb85ff4135d1d788989087159dbba7bab3b709d7c66d8e50ece795084250"}
Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.161230 4957 scope.go:117] "RemoveContainer" containerID="cfb8c9a50ccd65f948148f9634be317a0e1f6018a21e87a691f9e80583c8ce0b"
Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.612644 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
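
[Note] The liveness and readiness failures above are plain HTTP and TCP checks performed by the kubelet prober, and both fail with "connect: connection refused" while the targets are down. A standalone sketch reproducing the two checks (endpoints copied from the log; it must run on the node itself, since 127.0.0.1:8798 and the 10.217.0.x pod IPs are only reachable there):

package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// Liveness-style HTTP check, same endpoint the prober logs above.
	resp, err := http.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("liveness:", err) // e.g. "... connect: connection refused"
	} else {
		resp.Body.Close()
		fmt.Println("liveness: HTTP", resp.StatusCode)
	}

	// Readiness-style TCP check against the rabbitmq AMQPS port.
	conn, err := net.DialTimeout("tcp", "10.217.0.129:5671", time.Second)
	if err != nil {
		fmt.Println("readiness:", err)
	} else {
		conn.Close()
		fmt.Println("readiness: port open")
	}
}
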
podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.788156 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 18 14:53:38 crc kubenswrapper[4957]: I0218 14:53:38.844472 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:53:42 crc kubenswrapper[4957]: W0218 14:53:42.304088 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9a4bcc_e05d_4168_8615_e87279f3b92d.slice/crio-105b93dd13c119fe72fb81b857882ee5b32b70821d6a0adba2d32026994354b2 WatchSource:0}: Error finding container 105b93dd13c119fe72fb81b857882ee5b32b70821d6a0adba2d32026994354b2: Status 404 returned error can't find the container with id 105b93dd13c119fe72fb81b857882ee5b32b70821d6a0adba2d32026994354b2 Feb 18 14:53:42 crc kubenswrapper[4957]: I0218 14:53:42.964685 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:53:42 crc kubenswrapper[4957]: W0218 14:53:42.976381 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5790a7ec_79bb_49af_842f_e2b879f33184.slice/crio-243f1167de24c9eb4cff71ec977155593a86706e98f6f98a88f2d4858ef6d4f0 WatchSource:0}: Error finding container 243f1167de24c9eb4cff71ec977155593a86706e98f6f98a88f2d4858ef6d4f0: Status 404 returned error can't find the container with id 243f1167de24c9eb4cff71ec977155593a86706e98f6f98a88f2d4858ef6d4f0 Feb 18 14:53:42 crc kubenswrapper[4957]: I0218 14:53:42.994559 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.063859 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5s4rw" Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.226358 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n9k7h" event={"ID":"7e292891-edfa-438c-aadb-3a12e7fdd9a4","Type":"ContainerStarted","Data":"54647dfc8eddac82556d1a05a747cc199d72ca7ab1847176d3da33bb8107b08d"} Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.228596 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"243f1167de24c9eb4cff71ec977155593a86706e98f6f98a88f2d4858ef6d4f0"} Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.229937 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f35cf4ac-acd8-4db3-b633-bab9cac6e322","Type":"ContainerStarted","Data":"ef9b1dd4702a1feb8d4b9db96f518c1cd388f2ca91ef865eb8052c5ab313e316"} Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.234122 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5s4rw-config-k5xw2" event={"ID":"bb9a4bcc-e05d-4168-8615-e87279f3b92d","Type":"ContainerStarted","Data":"db5f4b49bc3138d1bb72a3a99d8840b031c7e803d218b53cfbc5deb258777807"} Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.234170 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-5s4rw-config-k5xw2" event={"ID":"bb9a4bcc-e05d-4168-8615-e87279f3b92d","Type":"ContainerStarted","Data":"105b93dd13c119fe72fb81b857882ee5b32b70821d6a0adba2d32026994354b2"} Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.239647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398"} Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.252833 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-n9k7h" podStartSLOduration=4.989637024 podStartE2EDuration="22.252782752s" podCreationTimestamp="2026-02-18 14:53:21 +0000 UTC" firstStartedPulling="2026-02-18 14:53:25.222743611 +0000 UTC m=+1311.743608355" lastFinishedPulling="2026-02-18 14:53:42.485889339 +0000 UTC m=+1329.006754083" observedRunningTime="2026-02-18 14:53:43.249255539 +0000 UTC m=+1329.770120283" watchObservedRunningTime="2026-02-18 14:53:43.252782752 +0000 UTC m=+1329.773647496" Feb 18 14:53:43 crc kubenswrapper[4957]: I0218 14:53:43.270639 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5s4rw-config-k5xw2" podStartSLOduration=10.270575989 podStartE2EDuration="10.270575989s" podCreationTimestamp="2026-02-18 14:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:43.267813199 +0000 UTC m=+1329.788677943" watchObservedRunningTime="2026-02-18 14:53:43.270575989 +0000 UTC m=+1329.791440753" Feb 18 14:53:44 crc kubenswrapper[4957]: I0218 14:53:44.256555 4957 generic.go:334] "Generic (PLEG): container finished" podID="bb9a4bcc-e05d-4168-8615-e87279f3b92d" containerID="db5f4b49bc3138d1bb72a3a99d8840b031c7e803d218b53cfbc5deb258777807" exitCode=0 Feb 18 14:53:44 crc kubenswrapper[4957]: I0218 14:53:44.257519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5s4rw-config-k5xw2" event={"ID":"bb9a4bcc-e05d-4168-8615-e87279f3b92d","Type":"ContainerDied","Data":"db5f4b49bc3138d1bb72a3a99d8840b031c7e803d218b53cfbc5deb258777807"} Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.287348 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f35cf4ac-acd8-4db3-b633-bab9cac6e322","Type":"ContainerStarted","Data":"797b010ef357795bfe0a8e95e8d7e95190074c7bf5a8522e72145362c38177a8"} Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.294637 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"3d3c09796447245e257fe9efac5708ab4fb315c10f779270af0ac30942bbd858"} Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.361729 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=9.574012863 podStartE2EDuration="11.361701573s" podCreationTimestamp="2026-02-18 14:53:34 +0000 UTC" firstStartedPulling="2026-02-18 14:53:42.999573538 +0000 UTC m=+1329.520438282" lastFinishedPulling="2026-02-18 14:53:44.787262248 +0000 UTC m=+1331.308126992" observedRunningTime="2026-02-18 14:53:45.311238395 +0000 UTC m=+1331.832103139" watchObservedRunningTime="2026-02-18 14:53:45.361701573 +0000 UTC m=+1331.882566317" Feb 18 14:53:45 crc 
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.730482 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5s4rw-config-k5xw2"
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.867438 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run\") pod \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") "
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.867696 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-log-ovn\") pod \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") "
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.867844 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2km6\" (UniqueName: \"kubernetes.io/projected/bb9a4bcc-e05d-4168-8615-e87279f3b92d-kube-api-access-n2km6\") pod \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") "
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.867574 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run" (OuterVolumeSpecName: "var-run") pod "bb9a4bcc-e05d-4168-8615-e87279f3b92d" (UID: "bb9a4bcc-e05d-4168-8615-e87279f3b92d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.868042 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-additional-scripts\") pod \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") "
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.868189 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-scripts\") pod \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") "
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.868314 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run-ovn\") pod \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\" (UID: \"bb9a4bcc-e05d-4168-8615-e87279f3b92d\") "
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.868044 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bb9a4bcc-e05d-4168-8615-e87279f3b92d" (UID: "bb9a4bcc-e05d-4168-8615-e87279f3b92d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.868894 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bb9a4bcc-e05d-4168-8615-e87279f3b92d" (UID: "bb9a4bcc-e05d-4168-8615-e87279f3b92d"). InnerVolumeSpecName "var-run-ovn".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.868933 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bb9a4bcc-e05d-4168-8615-e87279f3b92d" (UID: "bb9a4bcc-e05d-4168-8615-e87279f3b92d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.869140 4957 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.869372 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-scripts" (OuterVolumeSpecName: "scripts") pod "bb9a4bcc-e05d-4168-8615-e87279f3b92d" (UID: "bb9a4bcc-e05d-4168-8615-e87279f3b92d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.873390 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9a4bcc-e05d-4168-8615-e87279f3b92d-kube-api-access-n2km6" (OuterVolumeSpecName: "kube-api-access-n2km6") pod "bb9a4bcc-e05d-4168-8615-e87279f3b92d" (UID: "bb9a4bcc-e05d-4168-8615-e87279f3b92d"). InnerVolumeSpecName "kube-api-access-n2km6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.932136 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.933977 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.971167 4957 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.971210 4957 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb9a4bcc-e05d-4168-8615-e87279f3b92d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.971222 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2km6\" (UniqueName: \"kubernetes.io/projected/bb9a4bcc-e05d-4168-8615-e87279f3b92d-kube-api-access-n2km6\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.971236 4957 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:45 crc kubenswrapper[4957]: I0218 14:53:45.971246 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a4bcc-e05d-4168-8615-e87279f3b92d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.309825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5s4rw-config-k5xw2" 
event={"ID":"bb9a4bcc-e05d-4168-8615-e87279f3b92d","Type":"ContainerDied","Data":"105b93dd13c119fe72fb81b857882ee5b32b70821d6a0adba2d32026994354b2"} Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.309879 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="105b93dd13c119fe72fb81b857882ee5b32b70821d6a0adba2d32026994354b2" Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.309955 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5s4rw-config-k5xw2" Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.318568 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"ef74f6743dd9674015a05124f813713f416b7baf96695d5c6b907ea3dfc63400"} Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.318618 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"5edd5b17e9527dd9e2ea47458847e864227d27de83be3ae8299e84bb703907fb"} Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.318641 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"e34c369de4e2a1d340664e2b7a1b41cefa42a1d0c5293806f01afc666b39dd28"} Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.320769 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.419951 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5s4rw-config-k5xw2"] Feb 18 14:53:46 crc kubenswrapper[4957]: I0218 14:53:46.436670 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5s4rw-config-k5xw2"] Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.258770 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9a4bcc-e05d-4168-8615-e87279f3b92d" path="/var/lib/kubelet/pods/bb9a4bcc-e05d-4168-8615-e87279f3b92d/volumes" Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.367988 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"4ecde0112b29f949373389b7a7b227b5bce5c605be4cdc10949bf94643ad3857"} Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.368069 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"7e353a7c6d0728d704536043275d6da2eca4e495e1189dce89dd7e6d0456af90"} Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.368086 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"283300cbfe9ee6b11b7364f92c2578e5e6925674cc334e8a852229b1f6ba2702"} Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.368097 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"a64b3a139604c71af8775784f37357a90d9e5d985d6cb84506ece2e3c6e41bf7"} Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.610879 4957 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.774692 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.789637 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 18 14:53:48 crc kubenswrapper[4957]: I0218 14:53:48.842592 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:53:49 crc kubenswrapper[4957]: I0218 14:53:49.181405 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:53:49 crc kubenswrapper[4957]: I0218 14:53:49.181775 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="thanos-sidecar" containerID="cri-o://fccc976c9722aa4a2fe67ed0ff29a05d8dc16986238e71fd80704ea04c8d5922" gracePeriod=600 Feb 18 14:53:49 crc kubenswrapper[4957]: I0218 14:53:49.181829 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="config-reloader" containerID="cri-o://59f7116cb5db6a924d7e7a58a3dd919de240d0b5326611e603db981243e90536" gracePeriod=600 Feb 18 14:53:49 crc kubenswrapper[4957]: I0218 14:53:49.181919 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="prometheus" containerID="cri-o://8f029520e2ca59bbaef36a5c4bce5f194692763eecfd2eb7a68043aab76dee8f" gracePeriod=600 Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.401485 4957 generic.go:334] "Generic (PLEG): container finished" podID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerID="fccc976c9722aa4a2fe67ed0ff29a05d8dc16986238e71fd80704ea04c8d5922" exitCode=0 Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.401925 4957 generic.go:334] "Generic (PLEG): container finished" podID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerID="59f7116cb5db6a924d7e7a58a3dd919de240d0b5326611e603db981243e90536" exitCode=0 Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.401935 4957 generic.go:334] "Generic (PLEG): container finished" podID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerID="8f029520e2ca59bbaef36a5c4bce5f194692763eecfd2eb7a68043aab76dee8f" exitCode=0 Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.401957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerDied","Data":"fccc976c9722aa4a2fe67ed0ff29a05d8dc16986238e71fd80704ea04c8d5922"} Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.401990 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerDied","Data":"59f7116cb5db6a924d7e7a58a3dd919de240d0b5326611e603db981243e90536"} Feb 18 14:53:50 crc kubenswrapper[4957]: 
Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.402003 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerDied","Data":"8f029520e2ca59bbaef36a5c4bce5f194692763eecfd2eb7a68043aab76dee8f"}
Feb 18 14:53:50 crc kubenswrapper[4957]: I0218 14:53:50.932314 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": dial tcp 10.217.0.138:9090: connect: connection refused"
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.672241 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.799468 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-2\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.799686 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-thanos-prometheus-http-client-file\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.799786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjm5b\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-kube-api-access-tjm5b\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.799878 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-web-config\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.800070 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config-out\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.800190 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-tls-assets\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.800255 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") "
Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.800364 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName:
\"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-0\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.800456 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-1\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.800721 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\" (UID: \"383c2ab1-7f15-422c-a4ed-3c899e3a8c74\") " Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.821480 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.826182 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:51 crc kubenswrapper[4957]: I0218 14:53:51.827084 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.861000 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.864357 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.901061 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config-out" (OuterVolumeSpecName: "config-out") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.903094 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config" (OuterVolumeSpecName: "config") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922666 4957 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922859 4957 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922873 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922882 4957 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922892 4957 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922903 4957 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.922914 4957 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.925140 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-web-config" (OuterVolumeSpecName: "web-config") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.925736 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-kube-api-access-tjm5b" (OuterVolumeSpecName: "kube-api-access-tjm5b") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "kube-api-access-tjm5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:52 crc kubenswrapper[4957]: I0218 14:53:52.999587 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.060977 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjm5b\" (UniqueName: \"kubernetes.io/projected/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-kube-api-access-tjm5b\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.061013 4957 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/383c2ab1-7f15-422c-a4ed-3c899e3a8c74-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.102979 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"383c2ab1-7f15-422c-a4ed-3c899e3a8c74","Type":"ContainerDied","Data":"4d3ea28f0ffb9d113f8b314aecfee337e551f6ca3dd85545b810a06ff684769a"} Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.103037 4957 scope.go:117] "RemoveContainer" containerID="fccc976c9722aa4a2fe67ed0ff29a05d8dc16986238e71fd80704ea04c8d5922" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.129627 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"bef5b8975b475c30ac0c8e05f94c12831e95bae8d6a1ebc3ff9adf25b982fc8d"} Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.199269 4957 scope.go:117] "RemoveContainer" containerID="59f7116cb5db6a924d7e7a58a3dd919de240d0b5326611e603db981243e90536" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.249864 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "383c2ab1-7f15-422c-a4ed-3c899e3a8c74" (UID: "383c2ab1-7f15-422c-a4ed-3c899e3a8c74"). InnerVolumeSpecName "pvc-74077cac-0270-4dff-af12-9ef374ad97f5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.267410 4957 scope.go:117] "RemoveContainer" containerID="8f029520e2ca59bbaef36a5c4bce5f194692763eecfd2eb7a68043aab76dee8f" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.268801 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") on node \"crc\" " Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.300605 4957 scope.go:117] "RemoveContainer" containerID="d8b2071481292b9d247fc49961706323623ac512cdaf2f0f00c3971ed7058910" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.325318 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.325512 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-74077cac-0270-4dff-af12-9ef374ad97f5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5") on node "crc" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.349167 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.359576 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.370358 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.380302 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:53:53 crc kubenswrapper[4957]: E0218 14:53:53.380821 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="thanos-sidecar" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.380840 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="thanos-sidecar" Feb 18 14:53:53 crc kubenswrapper[4957]: E0218 14:53:53.380862 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="config-reloader" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.380871 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="config-reloader" Feb 18 14:53:53 crc kubenswrapper[4957]: E0218 14:53:53.380889 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="prometheus" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.380898 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="prometheus" Feb 18 14:53:53 crc kubenswrapper[4957]: E0218 14:53:53.380914 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9a4bcc-e05d-4168-8615-e87279f3b92d" containerName="ovn-config" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.380920 4957 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb9a4bcc-e05d-4168-8615-e87279f3b92d" containerName="ovn-config" Feb 18 14:53:53 crc kubenswrapper[4957]: E0218 14:53:53.380954 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="init-config-reloader" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.380963 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="init-config-reloader" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.381197 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9a4bcc-e05d-4168-8615-e87279f3b92d" containerName="ovn-config" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.381224 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="thanos-sidecar" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.381249 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="prometheus" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.381264 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" containerName="config-reloader" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.383568 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.390277 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.390558 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.390788 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.390903 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.391007 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.391140 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.391286 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8sf4s" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.391382 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.400530 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.402090 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473471 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473655 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a600b253-74c8-473b-ba57-e03ac741c902-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473690 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-config\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473723 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473761 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a600b253-74c8-473b-ba57-e03ac741c902-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473859 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.473951 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.474024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbbv\" (UniqueName: \"kubernetes.io/projected/a600b253-74c8-473b-ba57-e03ac741c902-kube-api-access-shbbv\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.474173 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.474257 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.474368 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.474515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.474599 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.577336 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.577558 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.578562 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.578639 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbbv\" (UniqueName: \"kubernetes.io/projected/a600b253-74c8-473b-ba57-e03ac741c902-kube-api-access-shbbv\") pod \"prometheus-metric-storage-0\" (UID: 
\"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579017 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579081 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579205 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579242 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579302 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579357 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a600b253-74c8-473b-ba57-e03ac741c902-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579568 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-config\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579605 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.579631 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a600b253-74c8-473b-ba57-e03ac741c902-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.583217 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.583521 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.583764 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.584198 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a600b253-74c8-473b-ba57-e03ac741c902-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.585243 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a600b253-74c8-473b-ba57-e03ac741c902-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.585567 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a600b253-74c8-473b-ba57-e03ac741c902-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.586175 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.586209 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18b2dcaab171f80c014d746e99657ace56755fa862ee3ab13f5fab6859dcbdba/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.587277 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.587378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.595050 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.595316 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a600b253-74c8-473b-ba57-e03ac741c902-config\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.601226 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbbv\" (UniqueName: \"kubernetes.io/projected/a600b253-74c8-473b-ba57-e03ac741c902-kube-api-access-shbbv\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.644334 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74077cac-0270-4dff-af12-9ef374ad97f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74077cac-0270-4dff-af12-9ef374ad97f5\") pod \"prometheus-metric-storage-0\" (UID: \"a600b253-74c8-473b-ba57-e03ac741c902\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:53 crc kubenswrapper[4957]: I0218 14:53:53.710116 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.153891 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"65020aac00e15ce1529558ddcae63cab59ce1a5ce1a34d9681f9bb2bf73f3ff1"} Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.154482 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"3c7bf283d92d34d79330052eb3acf49e97d48be75aba085f5151cedbee4fade9"} Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.154493 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"a133f4b8b1c50f11cc62c1dcf3796de500983652bc1f04601c4007562884dfa8"} Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.154503 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"a2381ec3ed2c8ab34e1acaaedd046f1a37be1c542a962d089efa9e91864a0dbb"} Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.154511 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"8965f150e8b23f0ea82750a76fe605c16eee10ab1e3780bc182c0e4c3a40d3f9"} Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.200442 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:53:54 crc kubenswrapper[4957]: W0218 14:53:54.209192 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda600b253_74c8_473b_ba57_e03ac741c902.slice/crio-0901f26082d5413a99a9a0ee663cc211ba2e8ab8fb1404237da69ef62744a8c5 WatchSource:0}: Error finding container 0901f26082d5413a99a9a0ee663cc211ba2e8ab8fb1404237da69ef62744a8c5: Status 404 returned error can't find the container with id 0901f26082d5413a99a9a0ee663cc211ba2e8ab8fb1404237da69ef62744a8c5 Feb 18 14:53:54 crc kubenswrapper[4957]: I0218 14:53:54.233730 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383c2ab1-7f15-422c-a4ed-3c899e3a8c74" path="/var/lib/kubelet/pods/383c2ab1-7f15-422c-a4ed-3c899e3a8c74/volumes" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.166588 4957 generic.go:334] "Generic (PLEG): container finished" podID="7e292891-edfa-438c-aadb-3a12e7fdd9a4" containerID="54647dfc8eddac82556d1a05a747cc199d72ca7ab1847176d3da33bb8107b08d" exitCode=0 Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.167023 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n9k7h" event={"ID":"7e292891-edfa-438c-aadb-3a12e7fdd9a4","Type":"ContainerDied","Data":"54647dfc8eddac82556d1a05a747cc199d72ca7ab1847176d3da33bb8107b08d"} Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.171345 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a600b253-74c8-473b-ba57-e03ac741c902","Type":"ContainerStarted","Data":"0901f26082d5413a99a9a0ee663cc211ba2e8ab8fb1404237da69ef62744a8c5"} Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.177292 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"5790a7ec-79bb-49af-842f-e2b879f33184","Type":"ContainerStarted","Data":"063bd89f4d0555c0544b0ebe7f022764b5b9f411716bd8849001af90ee752132"} Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.228600 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.91961776 podStartE2EDuration="51.228578312s" podCreationTimestamp="2026-02-18 14:53:04 +0000 UTC" firstStartedPulling="2026-02-18 14:53:42.978267658 +0000 UTC m=+1329.499132402" lastFinishedPulling="2026-02-18 14:53:51.28722821 +0000 UTC m=+1337.808092954" observedRunningTime="2026-02-18 14:53:55.215914644 +0000 UTC m=+1341.736779388" watchObservedRunningTime="2026-02-18 14:53:55.228578312 +0000 UTC m=+1341.749443056" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.496934 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fchng"] Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.498785 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.501207 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.530741 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fchng"] Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.624611 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.624822 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqwr\" (UniqueName: \"kubernetes.io/projected/33f87402-e008-4ddf-9943-f50e164ea0b5-kube-api-access-xsqwr\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.624875 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.624991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.625024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.625084 
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.625084 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-config\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.727273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.727351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsqwr\" (UniqueName: \"kubernetes.io/projected/33f87402-e008-4ddf-9943-f50e164ea0b5-kube-api-access-xsqwr\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.727378 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.727431 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.727448 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.727475 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-config\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.728331 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.728357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-config\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.728462 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.728765 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.729440 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.751762 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsqwr\" (UniqueName: \"kubernetes.io/projected/33f87402-e008-4ddf-9943-f50e164ea0b5-kube-api-access-xsqwr\") pod \"dnsmasq-dns-764c5664d7-fchng\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:55 crc kubenswrapper[4957]: I0218 14:53:55.817490 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:56 crc kubenswrapper[4957]: I0218 14:53:56.144211 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fchng"] Feb 18 14:53:56 crc kubenswrapper[4957]: I0218 14:53:56.200478 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fchng" event={"ID":"33f87402-e008-4ddf-9943-f50e164ea0b5","Type":"ContainerStarted","Data":"600ebd319fe66b2ab96a49b909e84d89726e796e10999503e6a7e932436c59c4"} Feb 18 14:53:56 crc kubenswrapper[4957]: I0218 14:53:56.920508 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.056699 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-combined-ca-bundle\") pod \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.056908 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-db-sync-config-data\") pod \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.056951 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lx6d\" (UniqueName: \"kubernetes.io/projected/7e292891-edfa-438c-aadb-3a12e7fdd9a4-kube-api-access-2lx6d\") pod \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.056972 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-config-data\") pod \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\" (UID: \"7e292891-edfa-438c-aadb-3a12e7fdd9a4\") " Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.112893 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e292891-edfa-438c-aadb-3a12e7fdd9a4-kube-api-access-2lx6d" (OuterVolumeSpecName: "kube-api-access-2lx6d") pod "7e292891-edfa-438c-aadb-3a12e7fdd9a4" (UID: "7e292891-edfa-438c-aadb-3a12e7fdd9a4"). InnerVolumeSpecName "kube-api-access-2lx6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.113373 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7e292891-edfa-438c-aadb-3a12e7fdd9a4" (UID: "7e292891-edfa-438c-aadb-3a12e7fdd9a4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.133874 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e292891-edfa-438c-aadb-3a12e7fdd9a4" (UID: "7e292891-edfa-438c-aadb-3a12e7fdd9a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.155701 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-config-data" (OuterVolumeSpecName: "config-data") pod "7e292891-edfa-438c-aadb-3a12e7fdd9a4" (UID: "7e292891-edfa-438c-aadb-3a12e7fdd9a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.159283 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.159334 4957 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.159347 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lx6d\" (UniqueName: \"kubernetes.io/projected/7e292891-edfa-438c-aadb-3a12e7fdd9a4-kube-api-access-2lx6d\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.159363 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e292891-edfa-438c-aadb-3a12e7fdd9a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.226986 4957 generic.go:334] "Generic (PLEG): container finished" podID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerID="ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6" exitCode=0 Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.227209 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fchng" event={"ID":"33f87402-e008-4ddf-9943-f50e164ea0b5","Type":"ContainerDied","Data":"ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6"} Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.232893 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n9k7h" event={"ID":"7e292891-edfa-438c-aadb-3a12e7fdd9a4","Type":"ContainerDied","Data":"a4c5d76202ac258be45d01b20ae0da5d786f032d317f8fb410d8912e9794b686"} Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.233198 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c5d76202ac258be45d01b20ae0da5d786f032d317f8fb410d8912e9794b686" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.232952 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n9k7h" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.580891 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fchng"] Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.630587 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b5ll7"] Feb 18 14:53:57 crc kubenswrapper[4957]: E0218 14:53:57.631051 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e292891-edfa-438c-aadb-3a12e7fdd9a4" containerName="glance-db-sync" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.631067 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e292891-edfa-438c-aadb-3a12e7fdd9a4" containerName="glance-db-sync" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.632520 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e292891-edfa-438c-aadb-3a12e7fdd9a4" containerName="glance-db-sync" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.633645 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.644526 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b5ll7"] Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.774564 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-config\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.774661 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.774691 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.774802 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllkx\" (UniqueName: \"kubernetes.io/projected/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-kube-api-access-lllkx\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.775117 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.775156 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.876546 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.876600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.876623 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lllkx\" (UniqueName: \"kubernetes.io/projected/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-kube-api-access-lllkx\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.876707 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.876728 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.876823 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-config\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.877789 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.877789 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.877815 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-config\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.877877 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.878610 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.894798 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllkx\" (UniqueName: 
\"kubernetes.io/projected/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-kube-api-access-lllkx\") pod \"dnsmasq-dns-74f6bcbc87-b5ll7\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:57 crc kubenswrapper[4957]: I0218 14:53:57.954098 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.242685 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fchng" event={"ID":"33f87402-e008-4ddf-9943-f50e164ea0b5","Type":"ContainerStarted","Data":"7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4"} Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.242807 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-fchng" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerName="dnsmasq-dns" containerID="cri-o://7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4" gracePeriod=10 Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.243132 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.250467 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a600b253-74c8-473b-ba57-e03ac741c902","Type":"ContainerStarted","Data":"3732c6374675d74fdd74f42ce35944aad408270dc34d61e7279f0e4497dd18d7"} Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.262269 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-fchng" podStartSLOduration=3.262251068 podStartE2EDuration="3.262251068s" podCreationTimestamp="2026-02-18 14:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:53:58.259445946 +0000 UTC m=+1344.780310690" watchObservedRunningTime="2026-02-18 14:53:58.262251068 +0000 UTC m=+1344.783115812" Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.512839 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b5ll7"] Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.613593 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 14:53:58 crc kubenswrapper[4957]: I0218 14:53:58.774848 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.005156 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fchng" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.112383 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-sb\") pod \"33f87402-e008-4ddf-9943-f50e164ea0b5\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.112454 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-swift-storage-0\") pod \"33f87402-e008-4ddf-9943-f50e164ea0b5\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.112480 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-svc\") pod \"33f87402-e008-4ddf-9943-f50e164ea0b5\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.112537 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsqwr\" (UniqueName: \"kubernetes.io/projected/33f87402-e008-4ddf-9943-f50e164ea0b5-kube-api-access-xsqwr\") pod \"33f87402-e008-4ddf-9943-f50e164ea0b5\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.112569 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-nb\") pod \"33f87402-e008-4ddf-9943-f50e164ea0b5\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.112760 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-config\") pod \"33f87402-e008-4ddf-9943-f50e164ea0b5\" (UID: \"33f87402-e008-4ddf-9943-f50e164ea0b5\") " Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.118654 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f87402-e008-4ddf-9943-f50e164ea0b5-kube-api-access-xsqwr" (OuterVolumeSpecName: "kube-api-access-xsqwr") pod "33f87402-e008-4ddf-9943-f50e164ea0b5" (UID: "33f87402-e008-4ddf-9943-f50e164ea0b5"). InnerVolumeSpecName "kube-api-access-xsqwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.173037 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33f87402-e008-4ddf-9943-f50e164ea0b5" (UID: "33f87402-e008-4ddf-9943-f50e164ea0b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.181379 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33f87402-e008-4ddf-9943-f50e164ea0b5" (UID: "33f87402-e008-4ddf-9943-f50e164ea0b5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.184066 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33f87402-e008-4ddf-9943-f50e164ea0b5" (UID: "33f87402-e008-4ddf-9943-f50e164ea0b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.189960 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-config" (OuterVolumeSpecName: "config") pod "33f87402-e008-4ddf-9943-f50e164ea0b5" (UID: "33f87402-e008-4ddf-9943-f50e164ea0b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.213410 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33f87402-e008-4ddf-9943-f50e164ea0b5" (UID: "33f87402-e008-4ddf-9943-f50e164ea0b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.214834 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.214921 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.214995 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.215056 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsqwr\" (UniqueName: \"kubernetes.io/projected/33f87402-e008-4ddf-9943-f50e164ea0b5-kube-api-access-xsqwr\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.215117 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.215167 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f87402-e008-4ddf-9943-f50e164ea0b5-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.263269 4957 generic.go:334] "Generic (PLEG): container finished" podID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerID="7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4" exitCode=0 Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.263357 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fchng" event={"ID":"33f87402-e008-4ddf-9943-f50e164ea0b5","Type":"ContainerDied","Data":"7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4"} Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 
14:53:59.263379 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fchng"
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.263399 4957 scope.go:117] "RemoveContainer" containerID="7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4"
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.263388 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fchng" event={"ID":"33f87402-e008-4ddf-9943-f50e164ea0b5","Type":"ContainerDied","Data":"600ebd319fe66b2ab96a49b909e84d89726e796e10999503e6a7e932436c59c4"}
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.265503 4957 generic.go:334] "Generic (PLEG): container finished" podID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerID="bcb8e02e1a8f62e4f7738021085499e43efe9f8d664c4fbe3771cc772b9da494" exitCode=0
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.265764 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" event={"ID":"38b6d3b3-c319-4fb5-b91c-0f8607178ad0","Type":"ContainerDied","Data":"bcb8e02e1a8f62e4f7738021085499e43efe9f8d664c4fbe3771cc772b9da494"}
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.265811 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" event={"ID":"38b6d3b3-c319-4fb5-b91c-0f8607178ad0","Type":"ContainerStarted","Data":"7317f4075e67c60790b5a278daba856f0d23cdc471501878baf5a0f427e2bfbc"}
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.309972 4957 scope.go:117] "RemoveContainer" containerID="ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6"
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.390285 4957 scope.go:117] "RemoveContainer" containerID="7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4"
Feb 18 14:53:59 crc kubenswrapper[4957]: E0218 14:53:59.393756 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4\": container with ID starting with 7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4 not found: ID does not exist" containerID="7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4"
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.393879 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4"} err="failed to get container status \"7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4\": rpc error: code = NotFound desc = could not find container \"7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4\": container with ID starting with 7a1cac59e7feec292c8c04bf2e1df8536ddf82f2cc8d5f21b93b78ef63b2bce4 not found: ID does not exist"
Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.393957 4957 scope.go:117] "RemoveContainer" containerID="ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6"
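
The RemoveContainer / NotFound exchange above is benign: by the time garbage collection asks CRI-O for the status of containers 7a1cac59… and ac471fc0…, the runtime has already purged them, so ContainerStatus returns gRPC NotFound and pod_container_deletor merely records that the delete had nothing left to do. Deletion is idempotent; "already gone" counts as success. Below is a sketch of that pattern under stated assumptions — it is not kubelet source, the runtimeService interface is a stand-in for the real CRI client, and google.golang.org/grpc is assumed to be on the module path.

    // removecontainer.go - a sketch of the idempotent-delete pattern
    // behind the "DeleteContainer returned error" lines above: a gRPC
    // NotFound from the runtime means the container is already absent,
    // so removal is treated as complete rather than as a failure.
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // runtimeService is a stand-in for the CRI client, not the real API.
    type runtimeService interface {
    	RemoveContainer(id string) error
    }

    // removeContainer deletes a container, ignoring NotFound: the desired
    // end state ("container absent") already holds.
    func removeContainer(rt runtimeService, id string) error {
    	err := rt.RemoveContainer(id)
    	if err == nil || status.Code(err) == codes.NotFound {
    		return nil
    	}
    	return fmt.Errorf("failed to remove container %q: %w", id, err)
    }

    type fakeRuntime struct{}

    func (fakeRuntime) RemoveContainer(id string) error {
    	// Simulate the response seen in the log: the runtime has already
    	// purged this container.
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
    	err := removeContainer(fakeRuntime{}, "7a1cac59e7fe")
    	fmt.Println(err) // <nil>: NotFound is success for a delete
    }
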
containerID="ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.394583 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6"} err="failed to get container status \"ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6\": rpc error: code = NotFound desc = could not find container \"ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6\": container with ID starting with ac471fc013f3486a15a2ea06a58466797584b5c1239da12ad1af0045cf04d4e6 not found: ID does not exist" Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.407171 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fchng"] Feb 18 14:53:59 crc kubenswrapper[4957]: I0218 14:53:59.415737 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fchng"] Feb 18 14:53:59 crc kubenswrapper[4957]: E0218 14:53:59.592760 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f87402_e008_4ddf_9943_f50e164ea0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f87402_e008_4ddf_9943_f50e164ea0b5.slice/crio-600ebd319fe66b2ab96a49b909e84d89726e796e10999503e6a7e932436c59c4\": RecentStats: unable to find data in memory cache]" Feb 18 14:54:00 crc kubenswrapper[4957]: I0218 14:54:00.236277 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" path="/var/lib/kubelet/pods/33f87402-e008-4ddf-9943-f50e164ea0b5/volumes" Feb 18 14:54:00 crc kubenswrapper[4957]: I0218 14:54:00.275785 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" event={"ID":"38b6d3b3-c319-4fb5-b91c-0f8607178ad0","Type":"ContainerStarted","Data":"30b68765787f665bb4c6a9ad08829aa955f0bfd70d5d01623724f5fb62bd21f4"} Feb 18 14:54:00 crc kubenswrapper[4957]: I0218 14:54:00.299168 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" podStartSLOduration=3.299144984 podStartE2EDuration="3.299144984s" podCreationTimestamp="2026-02-18 14:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:00.293608113 +0000 UTC m=+1346.814472877" watchObservedRunningTime="2026-02-18 14:54:00.299144984 +0000 UTC m=+1346.820009728" Feb 18 14:54:01 crc kubenswrapper[4957]: I0218 14:54:01.288070 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:54:04 crc kubenswrapper[4957]: I0218 14:54:04.319280 4957 generic.go:334] "Generic (PLEG): container finished" podID="a600b253-74c8-473b-ba57-e03ac741c902" containerID="3732c6374675d74fdd74f42ce35944aad408270dc34d61e7279f0e4497dd18d7" exitCode=0 Feb 18 14:54:04 crc kubenswrapper[4957]: I0218 14:54:04.319370 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a600b253-74c8-473b-ba57-e03ac741c902","Type":"ContainerDied","Data":"3732c6374675d74fdd74f42ce35944aad408270dc34d61e7279f0e4497dd18d7"} Feb 18 14:54:05 crc kubenswrapper[4957]: I0218 14:54:05.335401 4957 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a600b253-74c8-473b-ba57-e03ac741c902","Type":"ContainerStarted","Data":"5ab0592f09fce695bc015bdfca0d2e8addf95441cb2ced54059dfcfa0ee32d3c"} Feb 18 14:54:07 crc kubenswrapper[4957]: I0218 14:54:07.955701 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.079272 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hss4q"] Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.079896 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hss4q" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerName="dnsmasq-dns" containerID="cri-o://f8bf5fff9370b1b5a8810319f2867ae628e75c611521dfbefc4ee7981333eb63" gracePeriod=10 Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.367337 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a600b253-74c8-473b-ba57-e03ac741c902","Type":"ContainerStarted","Data":"d53cf075ae50711f65edb77f628ebddd34dcfe09b770fc6c2e9597f11a218df8"} Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.367677 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a600b253-74c8-473b-ba57-e03ac741c902","Type":"ContainerStarted","Data":"98180c8a937e90a899670e28e6d1c7fef8b2610d017764095fc6437cbc42d0ae"} Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.371569 4957 generic.go:334] "Generic (PLEG): container finished" podID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerID="f8bf5fff9370b1b5a8810319f2867ae628e75c611521dfbefc4ee7981333eb63" exitCode=0 Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.371630 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hss4q" event={"ID":"d7283a8b-4d7e-4959-af72-06f15b3d73b0","Type":"ContainerDied","Data":"f8bf5fff9370b1b5a8810319f2867ae628e75c611521dfbefc4ee7981333eb63"} Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.399191 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.39916695 podStartE2EDuration="15.39916695s" podCreationTimestamp="2026-02-18 14:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:08.391810736 +0000 UTC m=+1354.912675480" watchObservedRunningTime="2026-02-18 14:54:08.39916695 +0000 UTC m=+1354.920031704" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.674496 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.710320 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.710367 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.716186 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.738569 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmxx4\" (UniqueName: \"kubernetes.io/projected/d7283a8b-4d7e-4959-af72-06f15b3d73b0-kube-api-access-pmxx4\") pod \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.738665 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-config\") pod \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.738712 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-nb\") pod \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.738773 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-dns-svc\") pod \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.738854 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-sb\") pod \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\" (UID: \"d7283a8b-4d7e-4959-af72-06f15b3d73b0\") " Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.746494 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7283a8b-4d7e-4959-af72-06f15b3d73b0-kube-api-access-pmxx4" (OuterVolumeSpecName: "kube-api-access-pmxx4") pod "d7283a8b-4d7e-4959-af72-06f15b3d73b0" (UID: "d7283a8b-4d7e-4959-af72-06f15b3d73b0"). InnerVolumeSpecName "kube-api-access-pmxx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.779842 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.798120 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7283a8b-4d7e-4959-af72-06f15b3d73b0" (UID: "d7283a8b-4d7e-4959-af72-06f15b3d73b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.819897 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7283a8b-4d7e-4959-af72-06f15b3d73b0" (UID: "d7283a8b-4d7e-4959-af72-06f15b3d73b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.820938 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-config" (OuterVolumeSpecName: "config") pod "d7283a8b-4d7e-4959-af72-06f15b3d73b0" (UID: "d7283a8b-4d7e-4959-af72-06f15b3d73b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.847872 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmxx4\" (UniqueName: \"kubernetes.io/projected/d7283a8b-4d7e-4959-af72-06f15b3d73b0-kube-api-access-pmxx4\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.847903 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.847915 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.847926 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.862192 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7283a8b-4d7e-4959-af72-06f15b3d73b0" (UID: "d7283a8b-4d7e-4959-af72-06f15b3d73b0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:08 crc kubenswrapper[4957]: I0218 14:54:08.949670 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7283a8b-4d7e-4959-af72-06f15b3d73b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.307759 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5j8tc"] Feb 18 14:54:09 crc kubenswrapper[4957]: E0218 14:54:09.318950 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerName="init" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.318968 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerName="init" Feb 18 14:54:09 crc kubenswrapper[4957]: E0218 14:54:09.318998 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerName="init" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.319005 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerName="init" Feb 18 14:54:09 crc kubenswrapper[4957]: E0218 14:54:09.319021 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerName="dnsmasq-dns" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.319027 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerName="dnsmasq-dns" Feb 18 14:54:09 crc kubenswrapper[4957]: E0218 14:54:09.319117 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerName="dnsmasq-dns" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.319127 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerName="dnsmasq-dns" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.319444 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f87402-e008-4ddf-9943-f50e164ea0b5" containerName="dnsmasq-dns" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.319460 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" containerName="dnsmasq-dns" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.320326 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.335192 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5j8tc"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.392632 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hss4q" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.392783 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hss4q" event={"ID":"d7283a8b-4d7e-4959-af72-06f15b3d73b0","Type":"ContainerDied","Data":"66f87977a717fe636aaf4ca22ae08761df04af2d8c02fabb73b9c78f88a16835"} Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.392822 4957 scope.go:117] "RemoveContainer" containerID="f8bf5fff9370b1b5a8810319f2867ae628e75c611521dfbefc4ee7981333eb63" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.405157 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.424470 4957 scope.go:117] "RemoveContainer" containerID="5b0dab01c64196efb43cdca4cc00dd15f59fa6a73036ced69de0f45d6d56b4f3" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.482266 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hss4q"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.493089 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hss4q"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.536616 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b38115e0-8b2a-42e5-8d73-a2009264ce13-operator-scripts\") pod \"heat-db-create-5j8tc\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.536794 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktz8v\" (UniqueName: \"kubernetes.io/projected/b38115e0-8b2a-42e5-8d73-a2009264ce13-kube-api-access-ktz8v\") pod \"heat-db-create-5j8tc\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.560987 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-a24c-account-create-update-j4ppv"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.562714 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.568727 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.590843 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a24c-account-create-update-j4ppv"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.610759 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-tsbrk"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.611985 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.638652 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe03eaf-305f-4528-b0df-d1d435a75f30-operator-scripts\") pod \"heat-a24c-account-create-update-j4ppv\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.638734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mb5z\" (UniqueName: \"kubernetes.io/projected/4fe03eaf-305f-4528-b0df-d1d435a75f30-kube-api-access-6mb5z\") pod \"heat-a24c-account-create-update-j4ppv\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.638878 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b38115e0-8b2a-42e5-8d73-a2009264ce13-operator-scripts\") pod \"heat-db-create-5j8tc\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.638949 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktz8v\" (UniqueName: \"kubernetes.io/projected/b38115e0-8b2a-42e5-8d73-a2009264ce13-kube-api-access-ktz8v\") pod \"heat-db-create-5j8tc\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.640709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b38115e0-8b2a-42e5-8d73-a2009264ce13-operator-scripts\") pod \"heat-db-create-5j8tc\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.654502 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tsbrk"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.683734 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktz8v\" (UniqueName: \"kubernetes.io/projected/b38115e0-8b2a-42e5-8d73-a2009264ce13-kube-api-access-ktz8v\") pod \"heat-db-create-5j8tc\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.706409 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8r67m"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.708109 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.730569 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8r67m"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.740825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c6557d-c336-47c4-b261-2896f28b3a6b-operator-scripts\") pod \"cinder-db-create-tsbrk\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.741007 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe03eaf-305f-4528-b0df-d1d435a75f30-operator-scripts\") pod \"heat-a24c-account-create-update-j4ppv\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.741076 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mb5z\" (UniqueName: \"kubernetes.io/projected/4fe03eaf-305f-4528-b0df-d1d435a75f30-kube-api-access-6mb5z\") pod \"heat-a24c-account-create-update-j4ppv\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.741130 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thvr\" (UniqueName: \"kubernetes.io/projected/57c6557d-c336-47c4-b261-2896f28b3a6b-kube-api-access-4thvr\") pod \"cinder-db-create-tsbrk\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.743784 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe03eaf-305f-4528-b0df-d1d435a75f30-operator-scripts\") pod \"heat-a24c-account-create-update-j4ppv\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.758497 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mflsm"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.761140 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.771174 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.771439 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.772293 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.772521 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t4lzm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.805846 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mflsm"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.810402 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mb5z\" (UniqueName: \"kubernetes.io/projected/4fe03eaf-305f-4528-b0df-d1d435a75f30-kube-api-access-6mb5z\") pod \"heat-a24c-account-create-update-j4ppv\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.823293 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fqdmc"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.824668 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fqdmc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.845018 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-operator-scripts\") pod \"barbican-db-create-8r67m\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.845186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-combined-ca-bundle\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.845341 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thvr\" (UniqueName: \"kubernetes.io/projected/57c6557d-c336-47c4-b261-2896f28b3a6b-kube-api-access-4thvr\") pod \"cinder-db-create-tsbrk\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.845464 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgh2\" (UniqueName: \"kubernetes.io/projected/fcf177d4-d412-4457-ad6d-a3423ee3dce0-kube-api-access-fkgh2\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.845504 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c6557d-c336-47c4-b261-2896f28b3a6b-operator-scripts\") pod \"cinder-db-create-tsbrk\" (UID: 
\"57c6557d-c336-47c4-b261-2896f28b3a6b\") " pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.846241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c6557d-c336-47c4-b261-2896f28b3a6b-operator-scripts\") pod \"cinder-db-create-tsbrk\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.846734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6rw\" (UniqueName: \"kubernetes.io/projected/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-kube-api-access-wf6rw\") pod \"barbican-db-create-8r67m\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.846860 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-config-data\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.848388 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fqdmc"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.881654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thvr\" (UniqueName: \"kubernetes.io/projected/57c6557d-c336-47c4-b261-2896f28b3a6b-kube-api-access-4thvr\") pod \"cinder-db-create-tsbrk\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.892920 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.906109 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-14cd-account-create-update-cnqzt"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.907518 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-14cd-account-create-update-cnqzt" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.909913 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.931182 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-14cd-account-create-update-cnqzt"] Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.937353 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.950511 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkgh2\" (UniqueName: \"kubernetes.io/projected/fcf177d4-d412-4457-ad6d-a3423ee3dce0-kube-api-access-fkgh2\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.950634 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6rw\" (UniqueName: \"kubernetes.io/projected/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-kube-api-access-wf6rw\") pod \"barbican-db-create-8r67m\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.950708 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-config-data\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.951269 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-operator-scripts\") pod \"barbican-db-create-8r67m\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.951363 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-combined-ca-bundle\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.951412 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff441ba-44a2-40e6-b5ff-aad169a28811-operator-scripts\") pod \"neutron-db-create-fqdmc\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " pod="openstack/neutron-db-create-fqdmc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.951469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7fh\" (UniqueName: \"kubernetes.io/projected/aff441ba-44a2-40e6-b5ff-aad169a28811-kube-api-access-nr7fh\") pod \"neutron-db-create-fqdmc\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " pod="openstack/neutron-db-create-fqdmc" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.952403 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-operator-scripts\") pod \"barbican-db-create-8r67m\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.956488 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-combined-ca-bundle\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm" Feb 
Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.957395 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-config-data\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm"
Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.977996 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5j8tc"
Feb 18 14:54:09 crc kubenswrapper[4957]: I0218 14:54:09.992628 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6rw\" (UniqueName: \"kubernetes.io/projected/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-kube-api-access-wf6rw\") pod \"barbican-db-create-8r67m\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " pod="openstack/barbican-db-create-8r67m"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.004119 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkgh2\" (UniqueName: \"kubernetes.io/projected/fcf177d4-d412-4457-ad6d-a3423ee3dce0-kube-api-access-fkgh2\") pod \"keystone-db-sync-mflsm\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " pod="openstack/keystone-db-sync-mflsm"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.016975 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-339a-account-create-update-rjpcj"]
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.021395 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.031225 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.043903 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-339a-account-create-update-rjpcj"]
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.054631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfpp\" (UniqueName: \"kubernetes.io/projected/87460367-575b-4083-bff0-78dc45a41598-kube-api-access-ddfpp\") pod \"barbican-14cd-account-create-update-cnqzt\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.054749 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87460367-575b-4083-bff0-78dc45a41598-operator-scripts\") pod \"barbican-14cd-account-create-update-cnqzt\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.054821 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff441ba-44a2-40e6-b5ff-aad169a28811-operator-scripts\") pod \"neutron-db-create-fqdmc\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " pod="openstack/neutron-db-create-fqdmc"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.054858 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr7fh\" (UniqueName: \"kubernetes.io/projected/aff441ba-44a2-40e6-b5ff-aad169a28811-kube-api-access-nr7fh\") pod \"neutron-db-create-fqdmc\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " pod="openstack/neutron-db-create-fqdmc"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.055996 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff441ba-44a2-40e6-b5ff-aad169a28811-operator-scripts\") pod \"neutron-db-create-fqdmc\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " pod="openstack/neutron-db-create-fqdmc"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.085148 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr7fh\" (UniqueName: \"kubernetes.io/projected/aff441ba-44a2-40e6-b5ff-aad169a28811-kube-api-access-nr7fh\") pod \"neutron-db-create-fqdmc\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " pod="openstack/neutron-db-create-fqdmc"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.127953 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8r67m"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.146798 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mflsm"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.157380 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87460367-575b-4083-bff0-78dc45a41598-operator-scripts\") pod \"barbican-14cd-account-create-update-cnqzt\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.157442 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f89z\" (UniqueName: \"kubernetes.io/projected/83a8fb77-b238-40f4-8905-b6ebe3595115-kube-api-access-6f89z\") pod \"neutron-339a-account-create-update-rjpcj\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.157546 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a8fb77-b238-40f4-8905-b6ebe3595115-operator-scripts\") pod \"neutron-339a-account-create-update-rjpcj\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.157590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddfpp\" (UniqueName: \"kubernetes.io/projected/87460367-575b-4083-bff0-78dc45a41598-kube-api-access-ddfpp\") pod \"barbican-14cd-account-create-update-cnqzt\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.158284 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87460367-575b-4083-bff0-78dc45a41598-operator-scripts\") pod \"barbican-14cd-account-create-update-cnqzt\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.175050 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fqdmc"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.175904 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7b22-account-create-update-t5mdx"]
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.177311 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.180104 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.219709 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7b22-account-create-update-t5mdx"]
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.254852 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddfpp\" (UniqueName: \"kubernetes.io/projected/87460367-575b-4083-bff0-78dc45a41598-kube-api-access-ddfpp\") pod \"barbican-14cd-account-create-update-cnqzt\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.265230 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-14cd-account-create-update-cnqzt"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.275392 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxrcv\" (UniqueName: \"kubernetes.io/projected/aeb1c40c-9227-4912-944f-c18fafab0fc6-kube-api-access-bxrcv\") pod \"cinder-7b22-account-create-update-t5mdx\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.275475 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a8fb77-b238-40f4-8905-b6ebe3595115-operator-scripts\") pod \"neutron-339a-account-create-update-rjpcj\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.275661 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb1c40c-9227-4912-944f-c18fafab0fc6-operator-scripts\") pod \"cinder-7b22-account-create-update-t5mdx\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.275690 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f89z\" (UniqueName: \"kubernetes.io/projected/83a8fb77-b238-40f4-8905-b6ebe3595115-kube-api-access-6f89z\") pod \"neutron-339a-account-create-update-rjpcj\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.277603 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a8fb77-b238-40f4-8905-b6ebe3595115-operator-scripts\") pod \"neutron-339a-account-create-update-rjpcj\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.288169 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7283a8b-4d7e-4959-af72-06f15b3d73b0" path="/var/lib/kubelet/pods/d7283a8b-4d7e-4959-af72-06f15b3d73b0/volumes"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.303624 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f89z\" (UniqueName: \"kubernetes.io/projected/83a8fb77-b238-40f4-8905-b6ebe3595115-kube-api-access-6f89z\") pod \"neutron-339a-account-create-update-rjpcj\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.357125 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-339a-account-create-update-rjpcj"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.379840 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxrcv\" (UniqueName: \"kubernetes.io/projected/aeb1c40c-9227-4912-944f-c18fafab0fc6-kube-api-access-bxrcv\") pod \"cinder-7b22-account-create-update-t5mdx\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.380368 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb1c40c-9227-4912-944f-c18fafab0fc6-operator-scripts\") pod \"cinder-7b22-account-create-update-t5mdx\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.381094 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb1c40c-9227-4912-944f-c18fafab0fc6-operator-scripts\") pod \"cinder-7b22-account-create-update-t5mdx\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.426917 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxrcv\" (UniqueName: \"kubernetes.io/projected/aeb1c40c-9227-4912-944f-c18fafab0fc6-kube-api-access-bxrcv\") pod \"cinder-7b22-account-create-update-t5mdx\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " pod="openstack/cinder-7b22-account-create-update-t5mdx"
Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.674223 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7b22-account-create-update-t5mdx"
Need to start a new one" pod="openstack/cinder-7b22-account-create-update-t5mdx" Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.773103 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tsbrk"] Feb 18 14:54:10 crc kubenswrapper[4957]: I0218 14:54:10.997303 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8r67m"] Feb 18 14:54:11 crc kubenswrapper[4957]: W0218 14:54:11.003590 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fe03eaf_305f_4528_b0df_d1d435a75f30.slice/crio-9353d7d47515c4a758d96cc5a3d821b7e0ba64f00801a0ff16106396c94b4b71 WatchSource:0}: Error finding container 9353d7d47515c4a758d96cc5a3d821b7e0ba64f00801a0ff16106396c94b4b71: Status 404 returned error can't find the container with id 9353d7d47515c4a758d96cc5a3d821b7e0ba64f00801a0ff16106396c94b4b71 Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.010517 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a24c-account-create-update-j4ppv"] Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.021462 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5j8tc"] Feb 18 14:54:11 crc kubenswrapper[4957]: W0218 14:54:11.214372 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcf177d4_d412_4457_ad6d_a3423ee3dce0.slice/crio-8201317d003ffd4cb1a87b58b33365229f0fab481436cdea9fbf9040a208d954 WatchSource:0}: Error finding container 8201317d003ffd4cb1a87b58b33365229f0fab481436cdea9fbf9040a208d954: Status 404 returned error can't find the container with id 8201317d003ffd4cb1a87b58b33365229f0fab481436cdea9fbf9040a208d954 Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.216800 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mflsm"] Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.254723 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fqdmc"] Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.472928 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mflsm" event={"ID":"fcf177d4-d412-4457-ad6d-a3423ee3dce0","Type":"ContainerStarted","Data":"8201317d003ffd4cb1a87b58b33365229f0fab481436cdea9fbf9040a208d954"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.476926 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fqdmc" event={"ID":"aff441ba-44a2-40e6-b5ff-aad169a28811","Type":"ContainerStarted","Data":"d2839c8077c6fa4527a18b22e1d355ae0a549e31a904bfd3335bbd2c418cc2b4"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.486761 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a24c-account-create-update-j4ppv" event={"ID":"4fe03eaf-305f-4528-b0df-d1d435a75f30","Type":"ContainerStarted","Data":"9353d7d47515c4a758d96cc5a3d821b7e0ba64f00801a0ff16106396c94b4b71"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.498300 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tsbrk" event={"ID":"57c6557d-c336-47c4-b261-2896f28b3a6b","Type":"ContainerStarted","Data":"da015619ca0f0a5498e7901cca85c8401fe3baaf6ba02eed568d37ebcdd4db97"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.498372 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tsbrk" 
event={"ID":"57c6557d-c336-47c4-b261-2896f28b3a6b","Type":"ContainerStarted","Data":"668d38b812770deeb5a692c504a33f107d108e29f5a2df07a9f2b11dedbc35ba"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.503153 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8r67m" event={"ID":"13bcc539-d1e7-4b12-b87b-d989a5f8db2d","Type":"ContainerStarted","Data":"8e8c6a17e52c3a48d3901875be2c4fa04ca3bea0c6882874a1aa35a7aebf92bd"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.514782 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5j8tc" event={"ID":"b38115e0-8b2a-42e5-8d73-a2009264ce13","Type":"ContainerStarted","Data":"4d3b6f62c898ba12b5c39fc77978bea2735eee4d9c09e19fc29f4022b6e5ca35"} Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.534727 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-tsbrk" podStartSLOduration=2.534710468 podStartE2EDuration="2.534710468s" podCreationTimestamp="2026-02-18 14:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:11.526980093 +0000 UTC m=+1358.047844827" watchObservedRunningTime="2026-02-18 14:54:11.534710468 +0000 UTC m=+1358.055575202" Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.551034 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8r67m" podStartSLOduration=2.551007272 podStartE2EDuration="2.551007272s" podCreationTimestamp="2026-02-18 14:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:11.546936803 +0000 UTC m=+1358.067801557" watchObservedRunningTime="2026-02-18 14:54:11.551007272 +0000 UTC m=+1358.071872006" Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.684915 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-14cd-account-create-update-cnqzt"] Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.697298 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-339a-account-create-update-rjpcj"] Feb 18 14:54:11 crc kubenswrapper[4957]: W0218 14:54:11.698382 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83a8fb77_b238_40f4_8905_b6ebe3595115.slice/crio-fc4bb39ce54e3f7dbd4739fc9e247e74c7ccd0bc8932dde05bd966846c302386 WatchSource:0}: Error finding container fc4bb39ce54e3f7dbd4739fc9e247e74c7ccd0bc8932dde05bd966846c302386: Status 404 returned error can't find the container with id fc4bb39ce54e3f7dbd4739fc9e247e74c7ccd0bc8932dde05bd966846c302386 Feb 18 14:54:11 crc kubenswrapper[4957]: W0218 14:54:11.701029 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeb1c40c_9227_4912_944f_c18fafab0fc6.slice/crio-b26b0f5f7b11449c66ef12871f93f377bdd9ba2e19891e9fd551e7256cc02199 WatchSource:0}: Error finding container b26b0f5f7b11449c66ef12871f93f377bdd9ba2e19891e9fd551e7256cc02199: Status 404 returned error can't find the container with id b26b0f5f7b11449c66ef12871f93f377bdd9ba2e19891e9fd551e7256cc02199 Feb 18 14:54:11 crc kubenswrapper[4957]: I0218 14:54:11.708863 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7b22-account-create-update-t5mdx"] Feb 18 14:54:12 crc kubenswrapper[4957]: 
I0218 14:54:12.527158 4957 generic.go:334] "Generic (PLEG): container finished" podID="57c6557d-c336-47c4-b261-2896f28b3a6b" containerID="da015619ca0f0a5498e7901cca85c8401fe3baaf6ba02eed568d37ebcdd4db97" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.527214 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tsbrk" event={"ID":"57c6557d-c336-47c4-b261-2896f28b3a6b","Type":"ContainerDied","Data":"da015619ca0f0a5498e7901cca85c8401fe3baaf6ba02eed568d37ebcdd4db97"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.529683 4957 generic.go:334] "Generic (PLEG): container finished" podID="83a8fb77-b238-40f4-8905-b6ebe3595115" containerID="5cd1786ec9ef9559a17db9df0d074bb006fae71bf22c18377d29ae099acc8d4b" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.529745 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-339a-account-create-update-rjpcj" event={"ID":"83a8fb77-b238-40f4-8905-b6ebe3595115","Type":"ContainerDied","Data":"5cd1786ec9ef9559a17db9df0d074bb006fae71bf22c18377d29ae099acc8d4b"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.529768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-339a-account-create-update-rjpcj" event={"ID":"83a8fb77-b238-40f4-8905-b6ebe3595115","Type":"ContainerStarted","Data":"fc4bb39ce54e3f7dbd4739fc9e247e74c7ccd0bc8932dde05bd966846c302386"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.531980 4957 generic.go:334] "Generic (PLEG): container finished" podID="13bcc539-d1e7-4b12-b87b-d989a5f8db2d" containerID="b8469d23458e28e2eb80e787472cfd03130fba22b4c4bd1c6c4fcc5ebe59e877" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.532037 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8r67m" event={"ID":"13bcc539-d1e7-4b12-b87b-d989a5f8db2d","Type":"ContainerDied","Data":"b8469d23458e28e2eb80e787472cfd03130fba22b4c4bd1c6c4fcc5ebe59e877"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.533851 4957 generic.go:334] "Generic (PLEG): container finished" podID="b38115e0-8b2a-42e5-8d73-a2009264ce13" containerID="50c99879806d6b17c98ffb370d86f137f5fd3f295e1ce4e6646b901bd59a279e" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.533961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5j8tc" event={"ID":"b38115e0-8b2a-42e5-8d73-a2009264ce13","Type":"ContainerDied","Data":"50c99879806d6b17c98ffb370d86f137f5fd3f295e1ce4e6646b901bd59a279e"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.537235 4957 generic.go:334] "Generic (PLEG): container finished" podID="aeb1c40c-9227-4912-944f-c18fafab0fc6" containerID="b06d36b41a26396c73cd03472cfd89ec2f223e907c596fcaf6b0c07bcf06c549" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.537305 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7b22-account-create-update-t5mdx" event={"ID":"aeb1c40c-9227-4912-944f-c18fafab0fc6","Type":"ContainerDied","Data":"b06d36b41a26396c73cd03472cfd89ec2f223e907c596fcaf6b0c07bcf06c549"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.537328 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7b22-account-create-update-t5mdx" event={"ID":"aeb1c40c-9227-4912-944f-c18fafab0fc6","Type":"ContainerStarted","Data":"b26b0f5f7b11449c66ef12871f93f377bdd9ba2e19891e9fd551e7256cc02199"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.545664 4957 generic.go:334] "Generic 
(PLEG): container finished" podID="87460367-575b-4083-bff0-78dc45a41598" containerID="375e4383c9a1ac11f448ebed339d8d05a5fb70e6593323b6e0289e7df8b8b7c6" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.545747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-14cd-account-create-update-cnqzt" event={"ID":"87460367-575b-4083-bff0-78dc45a41598","Type":"ContainerDied","Data":"375e4383c9a1ac11f448ebed339d8d05a5fb70e6593323b6e0289e7df8b8b7c6"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.545788 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-14cd-account-create-update-cnqzt" event={"ID":"87460367-575b-4083-bff0-78dc45a41598","Type":"ContainerStarted","Data":"a5f1650988f35dd2d2b675b3dd3cb90e86417c5f4505d4d9d6ae58243e8933d6"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.548125 4957 generic.go:334] "Generic (PLEG): container finished" podID="aff441ba-44a2-40e6-b5ff-aad169a28811" containerID="580a1ba9ac8bfbfeb7a6c690c3ef34c7a8876d5c75cfb75290deeff74bc718c2" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.548159 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fqdmc" event={"ID":"aff441ba-44a2-40e6-b5ff-aad169a28811","Type":"ContainerDied","Data":"580a1ba9ac8bfbfeb7a6c690c3ef34c7a8876d5c75cfb75290deeff74bc718c2"} Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.550109 4957 generic.go:334] "Generic (PLEG): container finished" podID="4fe03eaf-305f-4528-b0df-d1d435a75f30" containerID="98a6371c228f55174fd2bce27cd78ad2e220e930622bb81f2528b1980b2651aa" exitCode=0 Feb 18 14:54:12 crc kubenswrapper[4957]: I0218 14:54:12.550159 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a24c-account-create-update-j4ppv" event={"ID":"4fe03eaf-305f-4528-b0df-d1d435a75f30","Type":"ContainerDied","Data":"98a6371c228f55174fd2bce27cd78ad2e220e930622bb81f2528b1980b2651aa"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.288303 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.315830 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-339a-account-create-update-rjpcj" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.346822 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.347186 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.349439 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fqdmc" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.371608 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.375508 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-14cd-account-create-update-cnqzt" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.381004 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7b22-account-create-update-t5mdx" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.388243 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b38115e0-8b2a-42e5-8d73-a2009264ce13-operator-scripts\") pod \"b38115e0-8b2a-42e5-8d73-a2009264ce13\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.388499 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktz8v\" (UniqueName: \"kubernetes.io/projected/b38115e0-8b2a-42e5-8d73-a2009264ce13-kube-api-access-ktz8v\") pod \"b38115e0-8b2a-42e5-8d73-a2009264ce13\" (UID: \"b38115e0-8b2a-42e5-8d73-a2009264ce13\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.389363 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38115e0-8b2a-42e5-8d73-a2009264ce13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b38115e0-8b2a-42e5-8d73-a2009264ce13" (UID: "b38115e0-8b2a-42e5-8d73-a2009264ce13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.394072 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38115e0-8b2a-42e5-8d73-a2009264ce13-kube-api-access-ktz8v" (OuterVolumeSpecName: "kube-api-access-ktz8v") pod "b38115e0-8b2a-42e5-8d73-a2009264ce13" (UID: "b38115e0-8b2a-42e5-8d73-a2009264ce13"). InnerVolumeSpecName "kube-api-access-ktz8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.490612 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c6557d-c336-47c4-b261-2896f28b3a6b-operator-scripts\") pod \"57c6557d-c336-47c4-b261-2896f28b3a6b\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.490664 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb1c40c-9227-4912-944f-c18fafab0fc6-operator-scripts\") pod \"aeb1c40c-9227-4912-944f-c18fafab0fc6\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.490725 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thvr\" (UniqueName: \"kubernetes.io/projected/57c6557d-c336-47c4-b261-2896f28b3a6b-kube-api-access-4thvr\") pod \"57c6557d-c336-47c4-b261-2896f28b3a6b\" (UID: \"57c6557d-c336-47c4-b261-2896f28b3a6b\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.490745 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6rw\" (UniqueName: \"kubernetes.io/projected/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-kube-api-access-wf6rw\") pod \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491492 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxrcv\" (UniqueName: \"kubernetes.io/projected/aeb1c40c-9227-4912-944f-c18fafab0fc6-kube-api-access-bxrcv\") pod \"aeb1c40c-9227-4912-944f-c18fafab0fc6\" (UID: \"aeb1c40c-9227-4912-944f-c18fafab0fc6\") " Feb 18 14:54:16 
crc kubenswrapper[4957]: I0218 14:54:16.491181 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c6557d-c336-47c4-b261-2896f28b3a6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57c6557d-c336-47c4-b261-2896f28b3a6b" (UID: "57c6557d-c336-47c4-b261-2896f28b3a6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491209 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb1c40c-9227-4912-944f-c18fafab0fc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aeb1c40c-9227-4912-944f-c18fafab0fc6" (UID: "aeb1c40c-9227-4912-944f-c18fafab0fc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491608 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mb5z\" (UniqueName: \"kubernetes.io/projected/4fe03eaf-305f-4528-b0df-d1d435a75f30-kube-api-access-6mb5z\") pod \"4fe03eaf-305f-4528-b0df-d1d435a75f30\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491672 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr7fh\" (UniqueName: \"kubernetes.io/projected/aff441ba-44a2-40e6-b5ff-aad169a28811-kube-api-access-nr7fh\") pod \"aff441ba-44a2-40e6-b5ff-aad169a28811\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491696 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87460367-575b-4083-bff0-78dc45a41598-operator-scripts\") pod \"87460367-575b-4083-bff0-78dc45a41598\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491721 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddfpp\" (UniqueName: \"kubernetes.io/projected/87460367-575b-4083-bff0-78dc45a41598-kube-api-access-ddfpp\") pod \"87460367-575b-4083-bff0-78dc45a41598\" (UID: \"87460367-575b-4083-bff0-78dc45a41598\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491791 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff441ba-44a2-40e6-b5ff-aad169a28811-operator-scripts\") pod \"aff441ba-44a2-40e6-b5ff-aad169a28811\" (UID: \"aff441ba-44a2-40e6-b5ff-aad169a28811\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491852 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe03eaf-305f-4528-b0df-d1d435a75f30-operator-scripts\") pod \"4fe03eaf-305f-4528-b0df-d1d435a75f30\" (UID: \"4fe03eaf-305f-4528-b0df-d1d435a75f30\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491883 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f89z\" (UniqueName: \"kubernetes.io/projected/83a8fb77-b238-40f4-8905-b6ebe3595115-kube-api-access-6f89z\") pod \"83a8fb77-b238-40f4-8905-b6ebe3595115\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491901 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-operator-scripts\") pod \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\" (UID: \"13bcc539-d1e7-4b12-b87b-d989a5f8db2d\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.491919 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a8fb77-b238-40f4-8905-b6ebe3595115-operator-scripts\") pod \"83a8fb77-b238-40f4-8905-b6ebe3595115\" (UID: \"83a8fb77-b238-40f4-8905-b6ebe3595115\") " Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492387 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87460367-575b-4083-bff0-78dc45a41598-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87460367-575b-4083-bff0-78dc45a41598" (UID: "87460367-575b-4083-bff0-78dc45a41598"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492561 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe03eaf-305f-4528-b0df-d1d435a75f30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fe03eaf-305f-4528-b0df-d1d435a75f30" (UID: "4fe03eaf-305f-4528-b0df-d1d435a75f30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492576 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13bcc539-d1e7-4b12-b87b-d989a5f8db2d" (UID: "13bcc539-d1e7-4b12-b87b-d989a5f8db2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492783 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff441ba-44a2-40e6-b5ff-aad169a28811-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aff441ba-44a2-40e6-b5ff-aad169a28811" (UID: "aff441ba-44a2-40e6-b5ff-aad169a28811"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492885 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a8fb77-b238-40f4-8905-b6ebe3595115-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83a8fb77-b238-40f4-8905-b6ebe3595115" (UID: "83a8fb77-b238-40f4-8905-b6ebe3595115"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492903 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c6557d-c336-47c4-b261-2896f28b3a6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492946 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeb1c40c-9227-4912-944f-c18fafab0fc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.492994 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktz8v\" (UniqueName: \"kubernetes.io/projected/b38115e0-8b2a-42e5-8d73-a2009264ce13-kube-api-access-ktz8v\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.493010 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87460367-575b-4083-bff0-78dc45a41598-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.493024 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b38115e0-8b2a-42e5-8d73-a2009264ce13-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.493035 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe03eaf-305f-4528-b0df-d1d435a75f30-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.493049 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.494964 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-kube-api-access-wf6rw" (OuterVolumeSpecName: "kube-api-access-wf6rw") pod "13bcc539-d1e7-4b12-b87b-d989a5f8db2d" (UID: "13bcc539-d1e7-4b12-b87b-d989a5f8db2d"). InnerVolumeSpecName "kube-api-access-wf6rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.495748 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff441ba-44a2-40e6-b5ff-aad169a28811-kube-api-access-nr7fh" (OuterVolumeSpecName: "kube-api-access-nr7fh") pod "aff441ba-44a2-40e6-b5ff-aad169a28811" (UID: "aff441ba-44a2-40e6-b5ff-aad169a28811"). InnerVolumeSpecName "kube-api-access-nr7fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.496892 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a8fb77-b238-40f4-8905-b6ebe3595115-kube-api-access-6f89z" (OuterVolumeSpecName: "kube-api-access-6f89z") pod "83a8fb77-b238-40f4-8905-b6ebe3595115" (UID: "83a8fb77-b238-40f4-8905-b6ebe3595115"). InnerVolumeSpecName "kube-api-access-6f89z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.496990 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c6557d-c336-47c4-b261-2896f28b3a6b-kube-api-access-4thvr" (OuterVolumeSpecName: "kube-api-access-4thvr") pod "57c6557d-c336-47c4-b261-2896f28b3a6b" (UID: "57c6557d-c336-47c4-b261-2896f28b3a6b"). InnerVolumeSpecName "kube-api-access-4thvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.497062 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87460367-575b-4083-bff0-78dc45a41598-kube-api-access-ddfpp" (OuterVolumeSpecName: "kube-api-access-ddfpp") pod "87460367-575b-4083-bff0-78dc45a41598" (UID: "87460367-575b-4083-bff0-78dc45a41598"). InnerVolumeSpecName "kube-api-access-ddfpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.497362 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe03eaf-305f-4528-b0df-d1d435a75f30-kube-api-access-6mb5z" (OuterVolumeSpecName: "kube-api-access-6mb5z") pod "4fe03eaf-305f-4528-b0df-d1d435a75f30" (UID: "4fe03eaf-305f-4528-b0df-d1d435a75f30"). InnerVolumeSpecName "kube-api-access-6mb5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.497862 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb1c40c-9227-4912-944f-c18fafab0fc6-kube-api-access-bxrcv" (OuterVolumeSpecName: "kube-api-access-bxrcv") pod "aeb1c40c-9227-4912-944f-c18fafab0fc6" (UID: "aeb1c40c-9227-4912-944f-c18fafab0fc6"). InnerVolumeSpecName "kube-api-access-bxrcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.590588 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-339a-account-create-update-rjpcj" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.590875 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-339a-account-create-update-rjpcj" event={"ID":"83a8fb77-b238-40f4-8905-b6ebe3595115","Type":"ContainerDied","Data":"fc4bb39ce54e3f7dbd4739fc9e247e74c7ccd0bc8932dde05bd966846c302386"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.590911 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc4bb39ce54e3f7dbd4739fc9e247e74c7ccd0bc8932dde05bd966846c302386" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.592248 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8r67m" event={"ID":"13bcc539-d1e7-4b12-b87b-d989a5f8db2d","Type":"ContainerDied","Data":"8e8c6a17e52c3a48d3901875be2c4fa04ca3bea0c6882874a1aa35a7aebf92bd"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.592272 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8c6a17e52c3a48d3901875be2c4fa04ca3bea0c6882874a1aa35a7aebf92bd" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.592285 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8r67m" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594639 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f89z\" (UniqueName: \"kubernetes.io/projected/83a8fb77-b238-40f4-8905-b6ebe3595115-kube-api-access-6f89z\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594661 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a8fb77-b238-40f4-8905-b6ebe3595115-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594670 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thvr\" (UniqueName: \"kubernetes.io/projected/57c6557d-c336-47c4-b261-2896f28b3a6b-kube-api-access-4thvr\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594679 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6rw\" (UniqueName: \"kubernetes.io/projected/13bcc539-d1e7-4b12-b87b-d989a5f8db2d-kube-api-access-wf6rw\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594690 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxrcv\" (UniqueName: \"kubernetes.io/projected/aeb1c40c-9227-4912-944f-c18fafab0fc6-kube-api-access-bxrcv\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594699 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mb5z\" (UniqueName: \"kubernetes.io/projected/4fe03eaf-305f-4528-b0df-d1d435a75f30-kube-api-access-6mb5z\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594708 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr7fh\" (UniqueName: \"kubernetes.io/projected/aff441ba-44a2-40e6-b5ff-aad169a28811-kube-api-access-nr7fh\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594718 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddfpp\" (UniqueName: \"kubernetes.io/projected/87460367-575b-4083-bff0-78dc45a41598-kube-api-access-ddfpp\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.594727 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff441ba-44a2-40e6-b5ff-aad169a28811-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.595868 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5j8tc" event={"ID":"b38115e0-8b2a-42e5-8d73-a2009264ce13","Type":"ContainerDied","Data":"4d3b6f62c898ba12b5c39fc77978bea2735eee4d9c09e19fc29f4022b6e5ca35"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.595890 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3b6f62c898ba12b5c39fc77978bea2735eee4d9c09e19fc29f4022b6e5ca35" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.595948 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5j8tc" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.604618 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7b22-account-create-update-t5mdx" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.604901 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7b22-account-create-update-t5mdx" event={"ID":"aeb1c40c-9227-4912-944f-c18fafab0fc6","Type":"ContainerDied","Data":"b26b0f5f7b11449c66ef12871f93f377bdd9ba2e19891e9fd551e7256cc02199"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.604941 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26b0f5f7b11449c66ef12871f93f377bdd9ba2e19891e9fd551e7256cc02199" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.607905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mflsm" event={"ID":"fcf177d4-d412-4457-ad6d-a3423ee3dce0","Type":"ContainerStarted","Data":"1641bdb6592b0882e3e7fa9edcfd2133ceb319917c89978a2b2c95ac12c5c6d1"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.610003 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a24c-account-create-update-j4ppv" event={"ID":"4fe03eaf-305f-4528-b0df-d1d435a75f30","Type":"ContainerDied","Data":"9353d7d47515c4a758d96cc5a3d821b7e0ba64f00801a0ff16106396c94b4b71"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.610045 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9353d7d47515c4a758d96cc5a3d821b7e0ba64f00801a0ff16106396c94b4b71" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.610045 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a24c-account-create-update-j4ppv" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.611867 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tsbrk" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.611883 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tsbrk" event={"ID":"57c6557d-c336-47c4-b261-2896f28b3a6b","Type":"ContainerDied","Data":"668d38b812770deeb5a692c504a33f107d108e29f5a2df07a9f2b11dedbc35ba"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.611976 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668d38b812770deeb5a692c504a33f107d108e29f5a2df07a9f2b11dedbc35ba" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.621445 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-14cd-account-create-update-cnqzt" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.621485 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-14cd-account-create-update-cnqzt" event={"ID":"87460367-575b-4083-bff0-78dc45a41598","Type":"ContainerDied","Data":"a5f1650988f35dd2d2b675b3dd3cb90e86417c5f4505d4d9d6ae58243e8933d6"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.621526 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f1650988f35dd2d2b675b3dd3cb90e86417c5f4505d4d9d6ae58243e8933d6" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.623489 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fqdmc" event={"ID":"aff441ba-44a2-40e6-b5ff-aad169a28811","Type":"ContainerDied","Data":"d2839c8077c6fa4527a18b22e1d355ae0a549e31a904bfd3335bbd2c418cc2b4"} Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.623529 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2839c8077c6fa4527a18b22e1d355ae0a549e31a904bfd3335bbd2c418cc2b4" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.623543 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fqdmc" Feb 18 14:54:16 crc kubenswrapper[4957]: I0218 14:54:16.633829 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mflsm" podStartSLOduration=2.802557893 podStartE2EDuration="7.633812308s" podCreationTimestamp="2026-02-18 14:54:09 +0000 UTC" firstStartedPulling="2026-02-18 14:54:11.235793585 +0000 UTC m=+1357.756658329" lastFinishedPulling="2026-02-18 14:54:16.067048 +0000 UTC m=+1362.587912744" observedRunningTime="2026-02-18 14:54:16.629859664 +0000 UTC m=+1363.150724408" watchObservedRunningTime="2026-02-18 14:54:16.633812308 +0000 UTC m=+1363.154677052" Feb 18 14:54:19 crc kubenswrapper[4957]: I0218 14:54:19.654541 4957 generic.go:334] "Generic (PLEG): container finished" podID="fcf177d4-d412-4457-ad6d-a3423ee3dce0" containerID="1641bdb6592b0882e3e7fa9edcfd2133ceb319917c89978a2b2c95ac12c5c6d1" exitCode=0 Feb 18 14:54:19 crc kubenswrapper[4957]: I0218 14:54:19.654593 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mflsm" event={"ID":"fcf177d4-d412-4457-ad6d-a3423ee3dce0","Type":"ContainerDied","Data":"1641bdb6592b0882e3e7fa9edcfd2133ceb319917c89978a2b2c95ac12c5c6d1"} Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.056364 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.130606 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkgh2\" (UniqueName: \"kubernetes.io/projected/fcf177d4-d412-4457-ad6d-a3423ee3dce0-kube-api-access-fkgh2\") pod \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.130695 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-combined-ca-bundle\") pod \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.130815 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-config-data\") pod \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\" (UID: \"fcf177d4-d412-4457-ad6d-a3423ee3dce0\") " Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.138663 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcf177d4-d412-4457-ad6d-a3423ee3dce0-kube-api-access-fkgh2" (OuterVolumeSpecName: "kube-api-access-fkgh2") pod "fcf177d4-d412-4457-ad6d-a3423ee3dce0" (UID: "fcf177d4-d412-4457-ad6d-a3423ee3dce0"). InnerVolumeSpecName "kube-api-access-fkgh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.165501 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcf177d4-d412-4457-ad6d-a3423ee3dce0" (UID: "fcf177d4-d412-4457-ad6d-a3423ee3dce0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.201203 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-config-data" (OuterVolumeSpecName: "config-data") pod "fcf177d4-d412-4457-ad6d-a3423ee3dce0" (UID: "fcf177d4-d412-4457-ad6d-a3423ee3dce0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.234367 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.234404 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkgh2\" (UniqueName: \"kubernetes.io/projected/fcf177d4-d412-4457-ad6d-a3423ee3dce0-kube-api-access-fkgh2\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.234437 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf177d4-d412-4457-ad6d-a3423ee3dce0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.673266 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mflsm" event={"ID":"fcf177d4-d412-4457-ad6d-a3423ee3dce0","Type":"ContainerDied","Data":"8201317d003ffd4cb1a87b58b33365229f0fab481436cdea9fbf9040a208d954"} Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.673744 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8201317d003ffd4cb1a87b58b33365229f0fab481436cdea9fbf9040a208d954" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.673320 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mflsm" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.963435 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-qqfkg"] Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.963972 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c6557d-c336-47c4-b261-2896f28b3a6b" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964000 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c6557d-c336-47c4-b261-2896f28b3a6b" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964022 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb1c40c-9227-4912-944f-c18fafab0fc6" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964031 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb1c40c-9227-4912-944f-c18fafab0fc6" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964054 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bcc539-d1e7-4b12-b87b-d989a5f8db2d" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964063 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bcc539-d1e7-4b12-b87b-d989a5f8db2d" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964092 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcf177d4-d412-4457-ad6d-a3423ee3dce0" containerName="keystone-db-sync" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964101 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcf177d4-d412-4457-ad6d-a3423ee3dce0" containerName="keystone-db-sync" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964111 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83a8fb77-b238-40f4-8905-b6ebe3595115" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964118 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a8fb77-b238-40f4-8905-b6ebe3595115" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964140 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38115e0-8b2a-42e5-8d73-a2009264ce13" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964148 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38115e0-8b2a-42e5-8d73-a2009264ce13" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964160 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff441ba-44a2-40e6-b5ff-aad169a28811" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964168 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff441ba-44a2-40e6-b5ff-aad169a28811" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964189 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe03eaf-305f-4528-b0df-d1d435a75f30" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964196 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe03eaf-305f-4528-b0df-d1d435a75f30" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: E0218 14:54:21.964215 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87460367-575b-4083-bff0-78dc45a41598" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964224 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="87460367-575b-4083-bff0-78dc45a41598" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964470 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff441ba-44a2-40e6-b5ff-aad169a28811" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964485 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="87460367-575b-4083-bff0-78dc45a41598" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964523 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb1c40c-9227-4912-944f-c18fafab0fc6" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964532 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a8fb77-b238-40f4-8905-b6ebe3595115" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964542 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c6557d-c336-47c4-b261-2896f28b3a6b" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964562 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcf177d4-d412-4457-ad6d-a3423ee3dce0" containerName="keystone-db-sync" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964574 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe03eaf-305f-4528-b0df-d1d435a75f30" containerName="mariadb-account-create-update" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964595 4957 
memory_manager.go:354] "RemoveStaleState removing state" podUID="13bcc539-d1e7-4b12-b87b-d989a5f8db2d" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.964609 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38115e0-8b2a-42e5-8d73-a2009264ce13" containerName="mariadb-database-create" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.966065 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.972791 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-qqfkg"] Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.996790 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6pv8c"] Feb 18 14:54:21 crc kubenswrapper[4957]: I0218 14:54:21.998457 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.005449 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t4lzm" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.005476 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.005806 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.006053 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.006221 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.047899 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6pv8c"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.062372 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-config-data\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.062810 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.062852 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-config\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.062887 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j66l2\" (UniqueName: \"kubernetes.io/projected/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-kube-api-access-j66l2\") pod \"keystone-bootstrap-6pv8c\" (UID: 
\"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.062945 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063017 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-combined-ca-bundle\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gsp\" (UniqueName: \"kubernetes.io/projected/3fd89ea8-c220-4e8a-a834-5149e3028e94-kube-api-access-66gsp\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063082 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-scripts\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063108 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-fernet-keys\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063163 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063234 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-credential-keys\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.063488 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-svc\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.112525 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-sg7fz"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.114340 4957 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.123328 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-sqlqh" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.129139 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sg7fz"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.138791 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-scripts\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-fernet-keys\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165521 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-credential-keys\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-svc\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-config-data\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165707 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ng7\" (UniqueName: \"kubernetes.io/projected/a3b12676-eb60-406c-a019-461370859d2a-kube-api-access-f5ng7\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165736 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-config-data\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " 
pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165827 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-config\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165885 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j66l2\" (UniqueName: \"kubernetes.io/projected/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-kube-api-access-j66l2\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.165941 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.166012 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-combined-ca-bundle\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.166033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-combined-ca-bundle\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.167126 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gsp\" (UniqueName: \"kubernetes.io/projected/3fd89ea8-c220-4e8a-a834-5149e3028e94-kube-api-access-66gsp\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.168480 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-config\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.170696 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc 
kubenswrapper[4957]: I0218 14:54:22.171520 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-svc\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.172776 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.173511 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.182298 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-combined-ca-bundle\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.182388 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-config-data\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.189669 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-credential-keys\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.196026 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-fernet-keys\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.209048 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-scripts\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.209531 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j66l2\" (UniqueName: \"kubernetes.io/projected/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-kube-api-access-j66l2\") pod \"keystone-bootstrap-6pv8c\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.218023 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gsp\" (UniqueName: 
\"kubernetes.io/projected/3fd89ea8-c220-4e8a-a834-5149e3028e94-kube-api-access-66gsp\") pod \"dnsmasq-dns-847c4cc679-qqfkg\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") " pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.253246 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5sgzz"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.254519 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.265486 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.266041 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.269020 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5sgzz"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.270278 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-config-data\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.270310 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5ng7\" (UniqueName: \"kubernetes.io/projected/a3b12676-eb60-406c-a019-461370859d2a-kube-api-access-f5ng7\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.270443 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-combined-ca-bundle\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.271458 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-llfw6" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.283760 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-combined-ca-bundle\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.284470 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-config-data\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.305736 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b2p7g"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.307398 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.308541 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.316094 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5ng7\" (UniqueName: \"kubernetes.io/projected/a3b12676-eb60-406c-a019-461370859d2a-kube-api-access-f5ng7\") pod \"heat-db-sync-sg7fz\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.318707 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.318921 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.319072 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zflrv" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.332488 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b2p7g"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.336734 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372082 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-config\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372134 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-db-sync-config-data\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372168 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-config-data\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372205 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-combined-ca-bundle\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-scripts\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372306 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8fzs\" (UniqueName: \"kubernetes.io/projected/c233a613-22d9-4534-811e-31acfe4eb302-kube-api-access-m8fzs\") pod \"neutron-db-sync-b2p7g\" (UID: 
\"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372326 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9205f99b-873e-4f53-9d89-85b77ca7adc1-etc-machine-id\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372368 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-combined-ca-bundle\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.372398 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpqx\" (UniqueName: \"kubernetes.io/projected/9205f99b-873e-4f53-9d89-85b77ca7adc1-kube-api-access-vgpqx\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.399031 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-d89wt"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.400178 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.405406 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-27zkn" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.405761 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.405903 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.410065 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-qqfkg"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.429887 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d89wt"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.446939 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-sg7fz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.478324 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8v9g9"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.480919 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-scripts\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.480973 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-combined-ca-bundle\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481017 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qld\" (UniqueName: \"kubernetes.io/projected/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-kube-api-access-42qld\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481051 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpqx\" (UniqueName: \"kubernetes.io/projected/9205f99b-873e-4f53-9d89-85b77ca7adc1-kube-api-access-vgpqx\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481113 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-combined-ca-bundle\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481333 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-config\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481358 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-db-sync-config-data\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481381 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-config-data\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-combined-ca-bundle\") pod 
\"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481449 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-scripts\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481470 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-logs\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481508 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-config-data\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481537 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8fzs\" (UniqueName: \"kubernetes.io/projected/c233a613-22d9-4534-811e-31acfe4eb302-kube-api-access-m8fzs\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481558 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9205f99b-873e-4f53-9d89-85b77ca7adc1-etc-machine-id\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.481644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9205f99b-873e-4f53-9d89-85b77ca7adc1-etc-machine-id\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.506352 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.507378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpqx\" (UniqueName: \"kubernetes.io/projected/9205f99b-873e-4f53-9d89-85b77ca7adc1-kube-api-access-vgpqx\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.507590 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-combined-ca-bundle\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.508089 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-combined-ca-bundle\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.520536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-config-data\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.529203 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-db-sync-config-data\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.540363 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8fzs\" (UniqueName: \"kubernetes.io/projected/c233a613-22d9-4534-811e-31acfe4eb302-kube-api-access-m8fzs\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.580728 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-config\") pod \"neutron-db-sync-b2p7g\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") " pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.603748 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8v9g9"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.631230 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-logs\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.635602 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.636029 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-config-data\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.640604 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.640961 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-scripts\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.641191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42qld\" (UniqueName: \"kubernetes.io/projected/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-kube-api-access-42qld\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.641440 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g62dr\" (UniqueName: \"kubernetes.io/projected/36143c0c-ebdb-4642-9926-b8d4601d0aae-kube-api-access-g62dr\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.641649 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.641844 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-config\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.641981 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.642193 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-combined-ca-bundle\") pod \"placement-db-sync-d89wt\" (UID: 
\"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.634069 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-logs\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.660062 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-combined-ca-bundle\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.679117 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-scripts\") pod \"cinder-db-sync-5sgzz\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.680444 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-config-data\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.681753 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-scripts\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.716359 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42qld\" (UniqueName: \"kubernetes.io/projected/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-kube-api-access-42qld\") pod \"placement-db-sync-d89wt\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.748302 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.748472 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.748560 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g62dr\" (UniqueName: \"kubernetes.io/projected/36143c0c-ebdb-4642-9926-b8d4601d0aae-kube-api-access-g62dr\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.748599 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.748660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-config\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.748680 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.752034 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.752680 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-config\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.753246 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.753743 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.753771 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jmmp8"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.759532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.767466 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.775389 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l98wd" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.775700 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.781444 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.788184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g62dr\" (UniqueName: \"kubernetes.io/projected/36143c0c-ebdb-4642-9926-b8d4601d0aae-kube-api-access-g62dr\") pod \"dnsmasq-dns-785d8bcb8c-8v9g9\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.815277 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b2p7g" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.817467 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jmmp8"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.850267 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-db-sync-config-data\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.850319 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4d9j\" (UniqueName: \"kubernetes.io/projected/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-kube-api-access-p4d9j\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.850486 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-combined-ca-bundle\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.865388 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.881301 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.881433 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.883490 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d89wt" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.889148 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.889614 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.905590 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.953227 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-db-sync-config-data\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.953275 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.953299 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4d9j\" (UniqueName: \"kubernetes.io/projected/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-kube-api-access-p4d9j\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.953319 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-log-httpd\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.953371 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-config-data\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.954901 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-run-httpd\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.955160 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-scripts\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.955190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-combined-ca-bundle\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " 
pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.955214 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzgs\" (UniqueName: \"kubernetes.io/projected/86855234-3589-4780-a8a5-75d8a4350c27-kube-api-access-qwzgs\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.955265 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.971932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-combined-ca-bundle\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.972451 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-db-sync-config-data\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:22 crc kubenswrapper[4957]: I0218 14:54:22.978307 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4d9j\" (UniqueName: \"kubernetes.io/projected/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-kube-api-access-p4d9j\") pod \"barbican-db-sync-jmmp8\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.057921 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-scripts\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.057970 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzgs\" (UniqueName: \"kubernetes.io/projected/86855234-3589-4780-a8a5-75d8a4350c27-kube-api-access-qwzgs\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.058005 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.058043 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.058066 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-log-httpd\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.058098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-config-data\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.058206 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-run-httpd\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.058917 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-run-httpd\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.061292 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-log-httpd\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.067304 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-config-data\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.068850 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.080279 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.081118 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-scripts\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.091099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzgs\" (UniqueName: \"kubernetes.io/projected/86855234-3589-4780-a8a5-75d8a4350c27-kube-api-access-qwzgs\") pod \"ceilometer-0\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.117097 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.125781 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.127678 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.141237 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f948p" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.141507 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.141624 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.141845 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.161866 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.199928 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-qqfkg"] Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.231573 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265076 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265116 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265158 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265275 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265333 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtm7c\" (UniqueName: \"kubernetes.io/projected/5ffcba6d-7fd0-4552-ab65-2619972f67ed-kube-api-access-qtm7c\") pod 
\"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265352 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-logs\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.265434 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.266215 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.269520 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.273990 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.274207 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.274385 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.345607 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6pv8c"] Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370112 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370182 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370209 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: W0218 14:54:23.370193 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d62bcf4_5e8d_4d44_8127_957e3ec97d8f.slice/crio-8915afa86e0b549a2d33e5914d729d9bcc272d22b0b48c847fdf0649a1a1c4c6 WatchSource:0}: Error finding container 8915afa86e0b549a2d33e5914d729d9bcc272d22b0b48c847fdf0649a1a1c4c6: Status 404 returned error can't find the container with id 8915afa86e0b549a2d33e5914d729d9bcc272d22b0b48c847fdf0649a1a1c4c6 Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370225 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370282 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370362 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370495 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.370779 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.374990 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlnsd\" (UniqueName: \"kubernetes.io/projected/ede80602-d0cc-4ce0-a368-35626d308374-kube-api-access-nlnsd\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375039 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375057 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375094 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375160 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-logs\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtm7c\" (UniqueName: \"kubernetes.io/projected/5ffcba6d-7fd0-4552-ab65-2619972f67ed-kube-api-access-qtm7c\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375213 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-logs\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375276 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.375335 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.381010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.381475 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-logs\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.386477 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.387168 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.391140 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.402266 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.402310 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/591545b4df89bef99058b4a9f50b40c040b4db545d218b7f1013700bffeeb8a2/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.410599 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtm7c\" (UniqueName: \"kubernetes.io/projected/5ffcba6d-7fd0-4552-ab65-2619972f67ed-kube-api-access-qtm7c\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477451 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477510 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477531 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477633 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlnsd\" (UniqueName: \"kubernetes.io/projected/ede80602-d0cc-4ce0-a368-35626d308374-kube-api-access-nlnsd\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477651 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477667 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.477697 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-logs\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.478163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-logs\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.479869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.482678 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.482728 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad2bd3876c77f68ca56101056f3774a80bb42ebacbba043834367849c0bb95ba/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.487133 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.488018 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.493848 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.497635 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.517476 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlnsd\" (UniqueName: \"kubernetes.io/projected/ede80602-d0cc-4ce0-a368-35626d308374-kube-api-access-nlnsd\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.527834 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.566593 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.726405 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sg7fz"]
Feb 18 14:54:23 crc kubenswrapper[4957]: W0218 14:54:23.728197 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3b12676_eb60_406c_a019_461370859d2a.slice/crio-561ff9d61bfb9c2c4c3f08e7f092c0a7a0146db64f47a54957ccb5821fad5a8b WatchSource:0}: Error finding container 561ff9d61bfb9c2c4c3f08e7f092c0a7a0146db64f47a54957ccb5821fad5a8b: Status 404 returned error can't find the container with id 561ff9d61bfb9c2c4c3f08e7f092c0a7a0146db64f47a54957ccb5821fad5a8b
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.760459 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.762572 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.765987 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pv8c" event={"ID":"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f","Type":"ContainerStarted","Data":"8915afa86e0b549a2d33e5914d729d9bcc272d22b0b48c847fdf0649a1a1c4c6"}
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.767913 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sg7fz" event={"ID":"a3b12676-eb60-406c-a019-461370859d2a","Type":"ContainerStarted","Data":"561ff9d61bfb9c2c4c3f08e7f092c0a7a0146db64f47a54957ccb5821fad5a8b"}
Feb 18 14:54:23 crc kubenswrapper[4957]: I0218 14:54:23.769142 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" event={"ID":"3fd89ea8-c220-4e8a-a834-5149e3028e94","Type":"ContainerStarted","Data":"7a7b9be6dc0b36d111f134edc8c507fc5e8901e8064e06524339248ee5f8898e"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.141646 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b2p7g"]
Feb 18 14:54:24 crc kubenswrapper[4957]: W0218 14:54:24.144160 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc233a613_22d9_4534_811e_31acfe4eb302.slice/crio-a752a4037f524ad880d10a47e6adfa7096ca649e275b55a9be4308129a7cf5e9 WatchSource:0}: Error finding container a752a4037f524ad880d10a47e6adfa7096ca649e275b55a9be4308129a7cf5e9: Status 404 returned error can't find the container with id a752a4037f524ad880d10a47e6adfa7096ca649e275b55a9be4308129a7cf5e9
Feb 18 14:54:24 crc kubenswrapper[4957]: W0218 14:54:24.160298 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9205f99b_873e_4f53_9d89_85b77ca7adc1.slice/crio-fb351eee6b70178887cf2ccc4c294f64582321e03dbc7b1ceac5fcb9d389e6ff WatchSource:0}: Error finding container fb351eee6b70178887cf2ccc4c294f64582321e03dbc7b1ceac5fcb9d389e6ff: Status 404 returned error can't find the container with id fb351eee6b70178887cf2ccc4c294f64582321e03dbc7b1ceac5fcb9d389e6ff
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.163718 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8v9g9"]
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.185788 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5sgzz"]
Feb 18 14:54:24 crc kubenswrapper[4957]: W0218 14:54:24.497571 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f3c5afd_c757_4e78_9f08_3d55f1b32ab6.slice/crio-73dd275d8967a06e469c52410df3c84377073f58d8a5bf7bed460aa5a0a06135 WatchSource:0}: Error finding container 73dd275d8967a06e469c52410df3c84377073f58d8a5bf7bed460aa5a0a06135: Status 404 returned error can't find the container with id 73dd275d8967a06e469c52410df3c84377073f58d8a5bf7bed460aa5a0a06135
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.516265 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d89wt"]
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.562828 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.589520 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jmmp8"]
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.724859 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:54:24 crc kubenswrapper[4957]: W0218 14:54:24.731770 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ffcba6d_7fd0_4552_ab65_2619972f67ed.slice/crio-30b6946b5627599830d84523d00e6577d86cdded90aa37a1497cb3b2a665c73a WatchSource:0}: Error finding container 30b6946b5627599830d84523d00e6577d86cdded90aa37a1497cb3b2a665c73a: Status 404 returned error can't find the container with id 30b6946b5627599830d84523d00e6577d86cdded90aa37a1497cb3b2a665c73a
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.800325 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sgzz" event={"ID":"9205f99b-873e-4f53-9d89-85b77ca7adc1","Type":"ContainerStarted","Data":"fb351eee6b70178887cf2ccc4c294f64582321e03dbc7b1ceac5fcb9d389e6ff"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.802033 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ffcba6d-7fd0-4552-ab65-2619972f67ed","Type":"ContainerStarted","Data":"30b6946b5627599830d84523d00e6577d86cdded90aa37a1497cb3b2a665c73a"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.811590 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d89wt" event={"ID":"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6","Type":"ContainerStarted","Data":"73dd275d8967a06e469c52410df3c84377073f58d8a5bf7bed460aa5a0a06135"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.820014 4957 generic.go:334] "Generic (PLEG): container finished" podID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerID="99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615" exitCode=0
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.820087 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" event={"ID":"36143c0c-ebdb-4642-9926-b8d4601d0aae","Type":"ContainerDied","Data":"99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.820113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" event={"ID":"36143c0c-ebdb-4642-9926-b8d4601d0aae","Type":"ContainerStarted","Data":"e3ffd264f750f63b668cbc55c343f10fa70a4c611492f1af8065b9abe22223e8"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.829905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b2p7g" event={"ID":"c233a613-22d9-4534-811e-31acfe4eb302","Type":"ContainerStarted","Data":"66b30415e27ae4448ad716f032c63f380c2d3287d510cea022a5669e21607e9e"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.829949 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b2p7g" event={"ID":"c233a613-22d9-4534-811e-31acfe4eb302","Type":"ContainerStarted","Data":"a752a4037f524ad880d10a47e6adfa7096ca649e275b55a9be4308129a7cf5e9"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.834892 4957 generic.go:334] "Generic (PLEG): container finished" podID="3fd89ea8-c220-4e8a-a834-5149e3028e94" containerID="b311590d54877127681d7965847ae67a76099dca1009cb2dd5c0fb9cf2fd1a43" exitCode=0
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.835053 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" event={"ID":"3fd89ea8-c220-4e8a-a834-5149e3028e94","Type":"ContainerDied","Data":"b311590d54877127681d7965847ae67a76099dca1009cb2dd5c0fb9cf2fd1a43"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.846010 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmmp8" event={"ID":"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e","Type":"ContainerStarted","Data":"93970d30a996a8ab3d90d6cda11e3c1ff7c5c3dab30e49b5bd86a1ad61e5d718"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.888353 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerStarted","Data":"18695ef7925f6c7b7e5f3919811981fbfe03a50f1df3dbf464a2c86a64916049"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.907992 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.943565 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pv8c" event={"ID":"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f","Type":"ContainerStarted","Data":"ec910696aeece37da26fa7ea20c29f50c0b0104dcac0861ada821dfc42313f6f"}
Feb 18 14:54:24 crc kubenswrapper[4957]: I0218 14:54:24.950130 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b2p7g" podStartSLOduration=2.950107629 podStartE2EDuration="2.950107629s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:24.913339436 +0000 UTC m=+1371.434204200" watchObservedRunningTime="2026-02-18 14:54:24.950107629 +0000 UTC m=+1371.470972373"
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.000683 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6pv8c" podStartSLOduration=4.000659642 podStartE2EDuration="4.000659642s" podCreationTimestamp="2026-02-18 14:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:24.985951996 +0000 UTC m=+1371.506816740" watchObservedRunningTime="2026-02-18 14:54:25.000659642 +0000 UTC m=+1371.521524386"
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.040343 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.436440 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.523344 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.593754 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg"
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.672841 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-nb\") pod \"3fd89ea8-c220-4e8a-a834-5149e3028e94\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") "
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.672930 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-sb\") pod \"3fd89ea8-c220-4e8a-a834-5149e3028e94\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") "
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.673009 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-svc\") pod \"3fd89ea8-c220-4e8a-a834-5149e3028e94\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") "
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.673072 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66gsp\" (UniqueName: \"kubernetes.io/projected/3fd89ea8-c220-4e8a-a834-5149e3028e94-kube-api-access-66gsp\") pod \"3fd89ea8-c220-4e8a-a834-5149e3028e94\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") "
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.673309 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-swift-storage-0\") pod \"3fd89ea8-c220-4e8a-a834-5149e3028e94\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") "
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.673374 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-config\") pod \"3fd89ea8-c220-4e8a-a834-5149e3028e94\" (UID: \"3fd89ea8-c220-4e8a-a834-5149e3028e94\") "
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.693927 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd89ea8-c220-4e8a-a834-5149e3028e94-kube-api-access-66gsp" (OuterVolumeSpecName: "kube-api-access-66gsp") pod "3fd89ea8-c220-4e8a-a834-5149e3028e94" (UID: "3fd89ea8-c220-4e8a-a834-5149e3028e94"). InnerVolumeSpecName "kube-api-access-66gsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.713910 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-config" (OuterVolumeSpecName: "config") pod "3fd89ea8-c220-4e8a-a834-5149e3028e94" (UID: "3fd89ea8-c220-4e8a-a834-5149e3028e94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.714939 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fd89ea8-c220-4e8a-a834-5149e3028e94" (UID: "3fd89ea8-c220-4e8a-a834-5149e3028e94"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.728130 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fd89ea8-c220-4e8a-a834-5149e3028e94" (UID: "3fd89ea8-c220-4e8a-a834-5149e3028e94"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.730767 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fd89ea8-c220-4e8a-a834-5149e3028e94" (UID: "3fd89ea8-c220-4e8a-a834-5149e3028e94"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.740030 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fd89ea8-c220-4e8a-a834-5149e3028e94" (UID: "3fd89ea8-c220-4e8a-a834-5149e3028e94"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.776149 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.776177 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.776191 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.776203 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66gsp\" (UniqueName: \"kubernetes.io/projected/3fd89ea8-c220-4e8a-a834-5149e3028e94-kube-api-access-66gsp\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.776217 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.776228 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd89ea8-c220-4e8a-a834-5149e3028e94-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.962693 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" event={"ID":"36143c0c-ebdb-4642-9926-b8d4601d0aae","Type":"ContainerStarted","Data":"77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54"} Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.963173 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.969166 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" event={"ID":"3fd89ea8-c220-4e8a-a834-5149e3028e94","Type":"ContainerDied","Data":"7a7b9be6dc0b36d111f134edc8c507fc5e8901e8064e06524339248ee5f8898e"} Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.969231 4957 scope.go:117] "RemoveContainer" containerID="b311590d54877127681d7965847ae67a76099dca1009cb2dd5c0fb9cf2fd1a43" Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.969387 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-qqfkg" Feb 18 14:54:25 crc kubenswrapper[4957]: I0218 14:54:25.987360 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ede80602-d0cc-4ce0-a368-35626d308374","Type":"ContainerStarted","Data":"d33aa3aae729e45aa301533a96df94372bf491ae70be8898f4115d36ad6baba2"} Feb 18 14:54:26 crc kubenswrapper[4957]: I0218 14:54:26.004150 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" podStartSLOduration=4.004127315 podStartE2EDuration="4.004127315s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:25.985852526 +0000 UTC m=+1372.506717280" watchObservedRunningTime="2026-02-18 14:54:26.004127315 +0000 UTC m=+1372.524992059" Feb 18 14:54:26 crc kubenswrapper[4957]: I0218 14:54:26.070849 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-qqfkg"] Feb 18 14:54:26 crc kubenswrapper[4957]: I0218 14:54:26.109237 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-qqfkg"] Feb 18 14:54:26 crc kubenswrapper[4957]: I0218 14:54:26.239781 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd89ea8-c220-4e8a-a834-5149e3028e94" path="/var/lib/kubelet/pods/3fd89ea8-c220-4e8a-a834-5149e3028e94/volumes" Feb 18 14:54:27 crc kubenswrapper[4957]: I0218 14:54:27.084245 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ede80602-d0cc-4ce0-a368-35626d308374","Type":"ContainerStarted","Data":"1df2633fec8e0f1f5d968256e9cfe0714bc57f9738bd45290087311c9103747e"} Feb 18 14:54:27 crc kubenswrapper[4957]: I0218 14:54:27.087381 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ffcba6d-7fd0-4552-ab65-2619972f67ed","Type":"ContainerStarted","Data":"daf47f6b7d8179f4bba9105a3f3d104e6a4d8eecda319bef898a5e782af12496"} Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.110581 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ede80602-d0cc-4ce0-a368-35626d308374","Type":"ContainerStarted","Data":"71ef7699281b56605d1b8eb7bda10261df1dbe43db75cb32dad9f3b867e588d0"} Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.110721 4957 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-log" containerID="cri-o://1df2633fec8e0f1f5d968256e9cfe0714bc57f9738bd45290087311c9103747e" gracePeriod=30 Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.110778 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-httpd" containerID="cri-o://71ef7699281b56605d1b8eb7bda10261df1dbe43db75cb32dad9f3b867e588d0" gracePeriod=30 Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.116691 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ffcba6d-7fd0-4552-ab65-2619972f67ed","Type":"ContainerStarted","Data":"a4e3c0c562c83ad085cf74d834374cabaf8f8dde3edde06cdd56418e10561f79"} Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.116859 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-log" containerID="cri-o://daf47f6b7d8179f4bba9105a3f3d104e6a4d8eecda319bef898a5e782af12496" gracePeriod=30 Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.116994 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-httpd" containerID="cri-o://a4e3c0c562c83ad085cf74d834374cabaf8f8dde3edde06cdd56418e10561f79" gracePeriod=30 Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.151316 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.151294099 podStartE2EDuration="6.151294099s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:28.141669241 +0000 UTC m=+1374.662533985" watchObservedRunningTime="2026-02-18 14:54:28.151294099 +0000 UTC m=+1374.672158853" Feb 18 14:54:28 crc kubenswrapper[4957]: I0218 14:54:28.182761 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.182729157 podStartE2EDuration="6.182729157s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:28.179827943 +0000 UTC m=+1374.700692687" watchObservedRunningTime="2026-02-18 14:54:28.182729157 +0000 UTC m=+1374.703593901" Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.163859 4957 generic.go:334] "Generic (PLEG): container finished" podID="ede80602-d0cc-4ce0-a368-35626d308374" containerID="71ef7699281b56605d1b8eb7bda10261df1dbe43db75cb32dad9f3b867e588d0" exitCode=0 Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.164198 4957 generic.go:334] "Generic (PLEG): container finished" podID="ede80602-d0cc-4ce0-a368-35626d308374" containerID="1df2633fec8e0f1f5d968256e9cfe0714bc57f9738bd45290087311c9103747e" exitCode=143 Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.164293 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ede80602-d0cc-4ce0-a368-35626d308374","Type":"ContainerDied","Data":"71ef7699281b56605d1b8eb7bda10261df1dbe43db75cb32dad9f3b867e588d0"} Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.164321 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ede80602-d0cc-4ce0-a368-35626d308374","Type":"ContainerDied","Data":"1df2633fec8e0f1f5d968256e9cfe0714bc57f9738bd45290087311c9103747e"} Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.171307 4957 generic.go:334] "Generic (PLEG): container finished" podID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerID="a4e3c0c562c83ad085cf74d834374cabaf8f8dde3edde06cdd56418e10561f79" exitCode=0 Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.171341 4957 generic.go:334] "Generic (PLEG): container finished" podID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerID="daf47f6b7d8179f4bba9105a3f3d104e6a4d8eecda319bef898a5e782af12496" exitCode=143 Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.171363 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ffcba6d-7fd0-4552-ab65-2619972f67ed","Type":"ContainerDied","Data":"a4e3c0c562c83ad085cf74d834374cabaf8f8dde3edde06cdd56418e10561f79"} Feb 18 14:54:29 crc kubenswrapper[4957]: I0218 14:54:29.171390 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ffcba6d-7fd0-4552-ab65-2619972f67ed","Type":"ContainerDied","Data":"daf47f6b7d8179f4bba9105a3f3d104e6a4d8eecda319bef898a5e782af12496"} Feb 18 14:54:29 crc kubenswrapper[4957]: E0218 14:54:29.657198 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d62bcf4_5e8d_4d44_8127_957e3ec97d8f.slice/crio-ec910696aeece37da26fa7ea20c29f50c0b0104dcac0861ada821dfc42313f6f.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:54:30 crc kubenswrapper[4957]: I0218 14:54:30.188334 4957 generic.go:334] "Generic (PLEG): container finished" podID="4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" containerID="ec910696aeece37da26fa7ea20c29f50c0b0104dcac0861ada821dfc42313f6f" exitCode=0 Feb 18 14:54:30 crc kubenswrapper[4957]: I0218 14:54:30.188745 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pv8c" event={"ID":"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f","Type":"ContainerDied","Data":"ec910696aeece37da26fa7ea20c29f50c0b0104dcac0861ada821dfc42313f6f"} Feb 18 14:54:32 crc kubenswrapper[4957]: I0218 14:54:32.907685 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:54:32 crc kubenswrapper[4957]: I0218 14:54:32.994496 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b5ll7"] Feb 18 14:54:32 crc kubenswrapper[4957]: I0218 14:54:32.994943 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns" containerID="cri-o://30b68765787f665bb4c6a9ad08829aa955f0bfd70d5d01623724f5fb62bd21f4" gracePeriod=10 Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.239621 4957 generic.go:334] "Generic (PLEG): container finished" podID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerID="30b68765787f665bb4c6a9ad08829aa955f0bfd70d5d01623724f5fb62bd21f4" exitCode=0 Feb 18 
14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.239928 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" event={"ID":"38b6d3b3-c319-4fb5-b91c-0f8607178ad0","Type":"ContainerDied","Data":"30b68765787f665bb4c6a9ad08829aa955f0bfd70d5d01623724f5fb62bd21f4"} Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.391815 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cjglk"] Feb 18 14:54:33 crc kubenswrapper[4957]: E0218 14:54:33.392349 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd89ea8-c220-4e8a-a834-5149e3028e94" containerName="init" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.392366 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd89ea8-c220-4e8a-a834-5149e3028e94" containerName="init" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.392641 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd89ea8-c220-4e8a-a834-5149e3028e94" containerName="init" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.394276 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.401364 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjglk"] Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.510896 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jg5\" (UniqueName: \"kubernetes.io/projected/fa177feb-1f29-43cb-baa9-6676bfa7c403-kube-api-access-26jg5\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.510996 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-catalog-content\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.511246 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-utilities\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.613238 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26jg5\" (UniqueName: \"kubernetes.io/projected/fa177feb-1f29-43cb-baa9-6676bfa7c403-kube-api-access-26jg5\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.613309 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-catalog-content\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.613436 4957 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-utilities\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.613908 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-catalog-content\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.613934 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-utilities\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.647769 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26jg5\" (UniqueName: \"kubernetes.io/projected/fa177feb-1f29-43cb-baa9-6676bfa7c403-kube-api-access-26jg5\") pod \"redhat-operators-cjglk\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:33 crc kubenswrapper[4957]: I0218 14:54:33.724714 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:54:37 crc kubenswrapper[4957]: I0218 14:54:37.954826 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.367293 4957 generic.go:334] "Generic (PLEG): container finished" podID="c233a613-22d9-4534-811e-31acfe4eb302" containerID="66b30415e27ae4448ad716f032c63f380c2d3287d510cea022a5669e21607e9e" exitCode=0 Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.367392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b2p7g" event={"ID":"c233a613-22d9-4534-811e-31acfe4eb302","Type":"ContainerDied","Data":"66b30415e27ae4448ad716f032c63f380c2d3287d510cea022a5669e21607e9e"} Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.563411 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.574937 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.579706 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.633485 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-internal-tls-certs\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.633814 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j66l2\" (UniqueName: \"kubernetes.io/projected/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-kube-api-access-j66l2\") pod \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.633842 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-combined-ca-bundle\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.633868 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-config-data\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.633922 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-httpd-run\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.633970 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-scripts\") pod \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634000 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-combined-ca-bundle\") pod \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634036 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-logs\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634233 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634275 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-scripts\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: 
\"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634381 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-httpd-run\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634461 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-public-tls-certs\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634556 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634601 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-credential-keys\") pod \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634658 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-config-data\") pod \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\" (UID: \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634716 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-scripts\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtm7c\" (UniqueName: \"kubernetes.io/projected/5ffcba6d-7fd0-4552-ab65-2619972f67ed-kube-api-access-qtm7c\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634789 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlnsd\" (UniqueName: \"kubernetes.io/projected/ede80602-d0cc-4ce0-a368-35626d308374-kube-api-access-nlnsd\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634819 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-config-data\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634853 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-fernet-keys\") pod \"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\" (UID: 
\"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-logs\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.634917 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-combined-ca-bundle\") pod \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\" (UID: \"5ffcba6d-7fd0-4552-ab65-2619972f67ed\") " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.635997 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.643336 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-kube-api-access-j66l2" (OuterVolumeSpecName: "kube-api-access-j66l2") pod "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" (UID: "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f"). InnerVolumeSpecName "kube-api-access-j66l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.647140 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-scripts" (OuterVolumeSpecName: "scripts") pod "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" (UID: "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.647480 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-logs" (OuterVolumeSpecName: "logs") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.647744 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-logs" (OuterVolumeSpecName: "logs") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.647770 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.650684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-scripts" (OuterVolumeSpecName: "scripts") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.652789 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" (UID: "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.652821 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ffcba6d-7fd0-4552-ab65-2619972f67ed-kube-api-access-qtm7c" (OuterVolumeSpecName: "kube-api-access-qtm7c") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "kube-api-access-qtm7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.660189 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" (UID: "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.676719 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede80602-d0cc-4ce0-a368-35626d308374-kube-api-access-nlnsd" (OuterVolumeSpecName: "kube-api-access-nlnsd") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "kube-api-access-nlnsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.685585 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-scripts" (OuterVolumeSpecName: "scripts") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.712178 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d" (OuterVolumeSpecName: "glance") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "pvc-e7977071-236d-4f61-8ab3-a6195398953d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.726686 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" (UID: "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: E0218 14:54:41.735150 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982 podName:ede80602-d0cc-4ce0-a368-35626d308374 nodeName:}" failed. No retries permitted until 2026-02-18 14:54:42.235097334 +0000 UTC m=+1388.755962258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739086 4957 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739259 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739329 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtm7c\" (UniqueName: \"kubernetes.io/projected/5ffcba6d-7fd0-4552-ab65-2619972f67ed-kube-api-access-qtm7c\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739398 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlnsd\" (UniqueName: \"kubernetes.io/projected/ede80602-d0cc-4ce0-a368-35626d308374-kube-api-access-nlnsd\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739506 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739593 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739684 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j66l2\" (UniqueName: \"kubernetes.io/projected/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-kube-api-access-j66l2\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739760 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ffcba6d-7fd0-4552-ab65-2619972f67ed-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739829 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.739886 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc 
kubenswrapper[4957]: I0218 14:54:41.739952 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.740056 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") on node \"crc\" " Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.740125 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.740195 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ede80602-d0cc-4ce0-a368-35626d308374-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.748762 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.752142 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.762699 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.765162 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-config-data" (OuterVolumeSpecName: "config-data") pod "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" (UID: "4d62bcf4-5e8d-4d44-8127-957e3ec97d8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.769397 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.795605 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-config-data" (OuterVolumeSpecName: "config-data") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.802230 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.802472 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e7977071-236d-4f61-8ab3-a6195398953d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d") on node "crc" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.824044 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-config-data" (OuterVolumeSpecName: "config-data") pod "5ffcba6d-7fd0-4552-ab65-2619972f67ed" (UID: "5ffcba6d-7fd0-4552-ab65-2619972f67ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841862 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841920 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841931 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841943 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffcba6d-7fd0-4552-ab65-2619972f67ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841954 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841965 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841976 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede80602-d0cc-4ce0-a368-35626d308374-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:54:41 crc kubenswrapper[4957]: I0218 14:54:41.841993 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") on node 
\"crc\" DevicePath \"\"" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.250438 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"ede80602-d0cc-4ce0-a368-35626d308374\" (UID: \"ede80602-d0cc-4ce0-a368-35626d308374\") " Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.265936 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982" (OuterVolumeSpecName: "glance") pod "ede80602-d0cc-4ce0-a368-35626d308374" (UID: "ede80602-d0cc-4ce0-a368-35626d308374"). InnerVolumeSpecName "pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.358151 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") on node \"crc\" " Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.379936 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ede80602-d0cc-4ce0-a368-35626d308374","Type":"ContainerDied","Data":"d33aa3aae729e45aa301533a96df94372bf491ae70be8898f4115d36ad6baba2"} Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.380007 4957 scope.go:117] "RemoveContainer" containerID="71ef7699281b56605d1b8eb7bda10261df1dbe43db75cb32dad9f3b867e588d0" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.380160 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.391695 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.391856 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982") on node "crc" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.391937 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pv8c" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.391933 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pv8c" event={"ID":"4d62bcf4-5e8d-4d44-8127-957e3ec97d8f","Type":"ContainerDied","Data":"8915afa86e0b549a2d33e5914d729d9bcc272d22b0b48c847fdf0649a1a1c4c6"} Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.391989 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8915afa86e0b549a2d33e5914d729d9bcc272d22b0b48c847fdf0649a1a1c4c6" Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.395656 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ffcba6d-7fd0-4552-ab65-2619972f67ed","Type":"ContainerDied","Data":"30b6946b5627599830d84523d00e6577d86cdded90aa37a1497cb3b2a665c73a"} Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.395663 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.461104 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.515877 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.543794 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.584504 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.620912 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.648853 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: E0218 14:54:42.649302 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-log"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649319 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-log"
Feb 18 14:54:42 crc kubenswrapper[4957]: E0218 14:54:42.649329 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-httpd"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649335 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-httpd"
Feb 18 14:54:42 crc kubenswrapper[4957]: E0218 14:54:42.649353 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-httpd"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649360 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-httpd"
Feb 18 14:54:42 crc kubenswrapper[4957]: E0218 14:54:42.649371 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-log"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649378 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-log"
Feb 18 14:54:42 crc kubenswrapper[4957]: E0218 14:54:42.649386 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" containerName="keystone-bootstrap"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649392 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" containerName="keystone-bootstrap"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649581 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-log"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649594 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-httpd"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649611 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede80602-d0cc-4ce0-a368-35626d308374" containerName="glance-httpd"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649621 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" containerName="glance-log"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.649635 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" containerName="keystone-bootstrap"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.650895 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.657598 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f948p"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.657639 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.657745 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.657864 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.688807 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.691268 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.697167 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.697475 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.749928 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767738 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767787 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767852 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767885 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9fj\" (UniqueName: \"kubernetes.io/projected/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-kube-api-access-7h9fj\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767926 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767954 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.767989 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.784067 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.857650 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6pv8c"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.866373 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6pv8c"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.869893 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.869947 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pttkc\" (UniqueName: \"kubernetes.io/projected/eb55dbb2-b556-42c0-af4c-e94f389576c3-kube-api-access-pttkc\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.869980 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870021 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870120 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9fj\" (UniqueName: \"kubernetes.io/projected/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-kube-api-access-7h9fj\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870225 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870254 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870274 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870311 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870874 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870896 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870922 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.870973 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-logs\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.877517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.879559 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.879627 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad2bd3876c77f68ca56101056f3774a80bb42ebacbba043834367849c0bb95ba/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.883121 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.883378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.884929 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.896032 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9fj\" (UniqueName: \"kubernetes.io/projected/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-kube-api-access-7h9fj\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.920670 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.936056 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dfvgf"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.938725 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.941339 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.941671 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.941853 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t4lzm"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.942052 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.942530 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.947393 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dfvgf"]
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.953406 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.972769 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-logs\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.972882 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pttkc\" (UniqueName: \"kubernetes.io/projected/eb55dbb2-b556-42c0-af4c-e94f389576c3-kube-api-access-pttkc\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.972928 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.972958 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.973082 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.973190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.973215 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.973248 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.974040 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-logs\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.975769 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.977693 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.977763 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/591545b4df89bef99058b4a9f50b40c040b4db545d218b7f1013700bffeeb8a2/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.984844 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.992606 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.993248 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.993668 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.994234 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:42 crc kubenswrapper[4957]: I0218 14:54:42.997162 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pttkc\" (UniqueName: \"kubernetes.io/projected/eb55dbb2-b556-42c0-af4c-e94f389576c3-kube-api-access-pttkc\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.038490 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " pod="openstack/glance-default-external-api-0"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.054695 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.075992 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnmm\" (UniqueName: \"kubernetes.io/projected/c6255f04-0970-449b-b48c-2a812f42b7c5-kube-api-access-jfnmm\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.076105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-config-data\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.076155 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-scripts\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.076191 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-fernet-keys\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.076223 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-combined-ca-bundle\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.076281 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-credential-keys\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.177929 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-combined-ca-bundle\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.178456 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-credential-keys\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.179103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfnmm\" (UniqueName: \"kubernetes.io/projected/c6255f04-0970-449b-b48c-2a812f42b7c5-kube-api-access-jfnmm\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.179197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-config-data\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.179264 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-scripts\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.179314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-fernet-keys\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.182898 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-fernet-keys\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.183100 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-config-data\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.183376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-combined-ca-bundle\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.183941 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-scripts\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.184353 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-credential-keys\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.200080 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfnmm\" (UniqueName: \"kubernetes.io/projected/c6255f04-0970-449b-b48c-2a812f42b7c5-kube-api-access-jfnmm\") pod \"keystone-bootstrap-dfvgf\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:43 crc kubenswrapper[4957]: I0218 14:54:43.425761 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfvgf"
Feb 18 14:54:44 crc kubenswrapper[4957]: I0218 14:54:44.225809 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d62bcf4-5e8d-4d44-8127-957e3ec97d8f" path="/var/lib/kubelet/pods/4d62bcf4-5e8d-4d44-8127-957e3ec97d8f/volumes"
Feb 18 14:54:44 crc kubenswrapper[4957]: I0218 14:54:44.226556 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ffcba6d-7fd0-4552-ab65-2619972f67ed" path="/var/lib/kubelet/pods/5ffcba6d-7fd0-4552-ab65-2619972f67ed/volumes"
Feb 18 14:54:44 crc kubenswrapper[4957]: I0218 14:54:44.227327 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede80602-d0cc-4ce0-a368-35626d308374" path="/var/lib/kubelet/pods/ede80602-d0cc-4ce0-a368-35626d308374/volumes"
Feb 18 14:54:47 crc kubenswrapper[4957]: I0218 14:54:47.962005 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout"
Feb 18 14:54:50 crc kubenswrapper[4957]: E0218 14:54:50.228795 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Feb 18 14:54:50 crc kubenswrapper[4957]: E0218 14:54:50.229487 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5ng7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-sg7fz_openstack(a3b12676-eb60-406c-a019-461370859d2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 14:54:50 crc kubenswrapper[4957]: E0218 14:54:50.230899 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-sg7fz" podUID="a3b12676-eb60-406c-a019-461370859d2a"
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.355542 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7"
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.367506 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b2p7g"
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.509922 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7"
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.510126 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" event={"ID":"38b6d3b3-c319-4fb5-b91c-0f8607178ad0","Type":"ContainerDied","Data":"7317f4075e67c60790b5a278daba856f0d23cdc471501878baf5a0f427e2bfbc"}
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.513241 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b2p7g" event={"ID":"c233a613-22d9-4534-811e-31acfe4eb302","Type":"ContainerDied","Data":"a752a4037f524ad880d10a47e6adfa7096ca649e275b55a9be4308129a7cf5e9"}
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.513281 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b2p7g"
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.513284 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a752a4037f524ad880d10a47e6adfa7096ca649e275b55a9be4308129a7cf5e9"
Feb 18 14:54:50 crc kubenswrapper[4957]: E0218 14:54:50.515388 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-sg7fz" podUID="a3b12676-eb60-406c-a019-461370859d2a"
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.518890 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-swift-storage-0\") pod \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.518975 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8fzs\" (UniqueName: \"kubernetes.io/projected/c233a613-22d9-4534-811e-31acfe4eb302-kube-api-access-m8fzs\") pod \"c233a613-22d9-4534-811e-31acfe4eb302\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.519065 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-svc\") pod \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.519137 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lllkx\" (UniqueName: \"kubernetes.io/projected/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-kube-api-access-lllkx\") pod \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.519195 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-sb\") pod \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.520214 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-config\") pod \"c233a613-22d9-4534-811e-31acfe4eb302\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.520309 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-config\") pod \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.520342 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-combined-ca-bundle\") pod \"c233a613-22d9-4534-811e-31acfe4eb302\" (UID: \"c233a613-22d9-4534-811e-31acfe4eb302\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.522221 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-nb\") pod \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\" (UID: \"38b6d3b3-c319-4fb5-b91c-0f8607178ad0\") "
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.528110 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-kube-api-access-lllkx" (OuterVolumeSpecName: "kube-api-access-lllkx") pod "38b6d3b3-c319-4fb5-b91c-0f8607178ad0" (UID: "38b6d3b3-c319-4fb5-b91c-0f8607178ad0"). InnerVolumeSpecName "kube-api-access-lllkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.535985 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c233a613-22d9-4534-811e-31acfe4eb302-kube-api-access-m8fzs" (OuterVolumeSpecName: "kube-api-access-m8fzs") pod "c233a613-22d9-4534-811e-31acfe4eb302" (UID: "c233a613-22d9-4534-811e-31acfe4eb302"). InnerVolumeSpecName "kube-api-access-m8fzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.576580 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c233a613-22d9-4534-811e-31acfe4eb302" (UID: "c233a613-22d9-4534-811e-31acfe4eb302"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.578509 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-config" (OuterVolumeSpecName: "config") pod "c233a613-22d9-4534-811e-31acfe4eb302" (UID: "c233a613-22d9-4534-811e-31acfe4eb302"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.604242 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38b6d3b3-c319-4fb5-b91c-0f8607178ad0" (UID: "38b6d3b3-c319-4fb5-b91c-0f8607178ad0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.604813 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38b6d3b3-c319-4fb5-b91c-0f8607178ad0" (UID: "38b6d3b3-c319-4fb5-b91c-0f8607178ad0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.610124 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38b6d3b3-c319-4fb5-b91c-0f8607178ad0" (UID: "38b6d3b3-c319-4fb5-b91c-0f8607178ad0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.614121 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38b6d3b3-c319-4fb5-b91c-0f8607178ad0" (UID: "38b6d3b3-c319-4fb5-b91c-0f8607178ad0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626543 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626571 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8fzs\" (UniqueName: \"kubernetes.io/projected/c233a613-22d9-4534-811e-31acfe4eb302-kube-api-access-m8fzs\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626581 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lllkx\" (UniqueName: \"kubernetes.io/projected/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-kube-api-access-lllkx\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626592 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626601 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626609 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626619 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c233a613-22d9-4534-811e-31acfe4eb302-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.626628 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.639485 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-config" (OuterVolumeSpecName: "config") pod "38b6d3b3-c319-4fb5-b91c-0f8607178ad0" (UID: "38b6d3b3-c319-4fb5-b91c-0f8607178ad0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.728837 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b6d3b3-c319-4fb5-b91c-0f8607178ad0-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.912471 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b5ll7"]
Feb 18 14:54:50 crc kubenswrapper[4957]: I0218 14:54:50.933181 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b5ll7"]
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.686409 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vx74r"]
Feb 18 14:54:51 crc kubenswrapper[4957]: E0218 14:54:51.688684 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c233a613-22d9-4534-811e-31acfe4eb302" containerName="neutron-db-sync"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.689622 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c233a613-22d9-4534-811e-31acfe4eb302" containerName="neutron-db-sync"
Feb 18 14:54:51 crc kubenswrapper[4957]: E0218 14:54:51.689848 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="init"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.689910 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="init"
Feb 18 14:54:51 crc kubenswrapper[4957]: E0218 14:54:51.689971 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.690030 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.690296 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c233a613-22d9-4534-811e-31acfe4eb302" containerName="neutron-db-sync"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.690401 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.691931 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.723517 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vx74r"]
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.777361 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.777503 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.777534 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzz9\" (UniqueName: \"kubernetes.io/projected/d2f29bea-5fda-4080-a98b-b869c41e3dab-kube-api-access-bjzz9\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.777645 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.777879 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-config\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.777947 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.879099 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-config\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.879148 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.879274 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.879324 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.879348 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjzz9\" (UniqueName: \"kubernetes.io/projected/d2f29bea-5fda-4080-a98b-b869c41e3dab-kube-api-access-bjzz9\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.879375 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.880306 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-config\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.880569 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.880851 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.880866 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.880859 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.923600 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzz9\" (UniqueName: \"kubernetes.io/projected/d2f29bea-5fda-4080-a98b-b869c41e3dab-kube-api-access-bjzz9\") pod \"dnsmasq-dns-55f844cf75-vx74r\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.960691 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dc87d6466-mkw7v"]
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.970878 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.974087 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zflrv"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.974293 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.974730 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.974934 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.984451 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-httpd-config\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.985673 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-combined-ca-bundle\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.985827 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-ovndb-tls-certs\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.985945 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-config\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:51 crc kubenswrapper[4957]: I0218 14:54:51.986047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd776\" (UniqueName: \"kubernetes.io/projected/49162e61-07d2-4596-a0c5-fd8f90890e35-kube-api-access-xd776\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:51.997624 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc87d6466-mkw7v"]
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.017575 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vx74r"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.087978 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-config\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.089014 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd776\" (UniqueName: \"kubernetes.io/projected/49162e61-07d2-4596-a0c5-fd8f90890e35-kube-api-access-xd776\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.089205 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-httpd-config\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.089585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-combined-ca-bundle\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.089790 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-ovndb-tls-certs\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.097372 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-config\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.103172 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-combined-ca-bundle\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.114491 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-httpd-config\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.119191 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-ovndb-tls-certs\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.120189 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd776\" (UniqueName: \"kubernetes.io/projected/49162e61-07d2-4596-a0c5-fd8f90890e35-kube-api-access-xd776\") pod \"neutron-6dc87d6466-mkw7v\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") " pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.229581 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" path="/var/lib/kubelet/pods/38b6d3b3-c319-4fb5-b91c-0f8607178ad0/volumes"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.340997 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:54:52 crc kubenswrapper[4957]: I0218 14:54:52.964743 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b5ll7" podUID="38b6d3b3-c319-4fb5-b91c-0f8607178ad0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.747690 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5676bc86fc-b7j6f"]
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.749507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5676bc86fc-b7j6f"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.753322 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.754234 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.769060 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5676bc86fc-b7j6f"]
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926022 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-internal-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926298 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-ovndb-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926344 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-httpd-config\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmhjp\" (UniqueName: \"kubernetes.io/projected/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-kube-api-access-rmhjp\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f"
Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926656 4957 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-config\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926892 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-combined-ca-bundle\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:53 crc kubenswrapper[4957]: I0218 14:54:53.926934 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-public-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.028393 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmhjp\" (UniqueName: \"kubernetes.io/projected/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-kube-api-access-rmhjp\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.029381 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-config\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.029554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-combined-ca-bundle\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.029639 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-public-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.029797 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-internal-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.029897 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-ovndb-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.030013 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-httpd-config\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.043983 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-config\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.044025 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-ovndb-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.045547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-public-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.046874 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmhjp\" (UniqueName: \"kubernetes.io/projected/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-kube-api-access-rmhjp\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.047019 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-internal-tls-certs\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.054308 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-combined-ca-bundle\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.058011 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-httpd-config\") pod \"neutron-5676bc86fc-b7j6f\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:54 crc kubenswrapper[4957]: I0218 14:54:54.069388 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:54:56 crc kubenswrapper[4957]: E0218 14:54:56.146054 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 14:54:56 crc kubenswrapper[4957]: E0218 14:54:56.146511 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgpqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5sgzz_openstack(9205f99b-873e-4f53-9d89-85b77ca7adc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:54:56 crc kubenswrapper[4957]: E0218 14:54:56.147860 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5sgzz" podUID="9205f99b-873e-4f53-9d89-85b77ca7adc1" Feb 18 14:54:56 crc kubenswrapper[4957]: I0218 14:54:56.152294 4957 scope.go:117] "RemoveContainer" 
containerID="1df2633fec8e0f1f5d968256e9cfe0714bc57f9738bd45290087311c9103747e" Feb 18 14:54:56 crc kubenswrapper[4957]: I0218 14:54:56.501348 4957 scope.go:117] "RemoveContainer" containerID="a4e3c0c562c83ad085cf74d834374cabaf8f8dde3edde06cdd56418e10561f79" Feb 18 14:54:56 crc kubenswrapper[4957]: I0218 14:54:56.552147 4957 scope.go:117] "RemoveContainer" containerID="daf47f6b7d8179f4bba9105a3f3d104e6a4d8eecda319bef898a5e782af12496" Feb 18 14:54:56 crc kubenswrapper[4957]: E0218 14:54:56.665011 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5sgzz" podUID="9205f99b-873e-4f53-9d89-85b77ca7adc1" Feb 18 14:54:56 crc kubenswrapper[4957]: I0218 14:54:56.665034 4957 scope.go:117] "RemoveContainer" containerID="30b68765787f665bb4c6a9ad08829aa955f0bfd70d5d01623724f5fb62bd21f4" Feb 18 14:54:56 crc kubenswrapper[4957]: I0218 14:54:56.697548 4957 scope.go:117] "RemoveContainer" containerID="bcb8e02e1a8f62e4f7738021085499e43efe9f8d664c4fbe3771cc772b9da494" Feb 18 14:54:56 crc kubenswrapper[4957]: I0218 14:54:56.991081 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjglk"] Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.102221 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:54:57 crc kubenswrapper[4957]: W0218 14:54:57.115446 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef0cbff_2dd2_4fee_86cc_04d3bd3032f9.slice/crio-9855b749d5fcd7f19ac26072a6f499ffdbfd396e69dc950cf8f4dbf737ec6587 WatchSource:0}: Error finding container 9855b749d5fcd7f19ac26072a6f499ffdbfd396e69dc950cf8f4dbf737ec6587: Status 404 returned error can't find the container with id 9855b749d5fcd7f19ac26072a6f499ffdbfd396e69dc950cf8f4dbf737ec6587 Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.577899 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dfvgf"] Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.623091 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vx74r"] Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.717164 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.752781 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfvgf" event={"ID":"c6255f04-0970-449b-b48c-2a812f42b7c5","Type":"ContainerStarted","Data":"669a4b34e552ae5f9a0f6b3ee57a7a1a2681915ef58d677310bb98f80ac3a3af"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.754761 4957 generic.go:334] "Generic (PLEG): container finished" podID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerID="7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c" exitCode=0 Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.754836 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerDied","Data":"7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.754854 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerStarted","Data":"6eaca2688003abaaf90afcd56b5547909ddd06529e0501f57cb103943a85b900"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.760826 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9","Type":"ContainerStarted","Data":"9855b749d5fcd7f19ac26072a6f499ffdbfd396e69dc950cf8f4dbf737ec6587"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.771245 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d89wt" event={"ID":"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6","Type":"ContainerStarted","Data":"731abfa56ca6f9c33d8db0d3a24173e07b4e8b32b32805453f0ea5349d13cba3"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.780991 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmmp8" event={"ID":"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e","Type":"ContainerStarted","Data":"dcdeccaa18d47ed9aba61a7acce59cd112e1073f48ac721be148f83adf44d04e"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.784680 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerStarted","Data":"cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413"} Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.830289 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-d89wt" podStartSLOduration=10.117048829 podStartE2EDuration="35.830270239s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="2026-02-18 14:54:24.515849156 +0000 UTC m=+1371.036713900" lastFinishedPulling="2026-02-18 14:54:50.229070566 +0000 UTC m=+1396.749935310" observedRunningTime="2026-02-18 14:54:57.792172787 +0000 UTC m=+1404.313037541" watchObservedRunningTime="2026-02-18 14:54:57.830270239 +0000 UTC m=+1404.351134983" Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.850937 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jmmp8" podStartSLOduration=4.286766103 podStartE2EDuration="35.850913807s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="2026-02-18 14:54:24.531503319 +0000 UTC m=+1371.052368063" lastFinishedPulling="2026-02-18 14:54:56.095651023 +0000 UTC m=+1402.616515767" observedRunningTime="2026-02-18 14:54:57.807935923 +0000 UTC m=+1404.328800667" watchObservedRunningTime="2026-02-18 14:54:57.850913807 +0000 UTC m=+1404.371778551" Feb 18 14:54:57 crc kubenswrapper[4957]: I0218 14:54:57.942007 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc87d6466-mkw7v"] Feb 18 14:54:57 crc kubenswrapper[4957]: W0218 14:54:57.968928 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49162e61_07d2_4596_a0c5_fd8f90890e35.slice/crio-8d0e58a24d3d8891c6c66af5e6e652482bf62506026627c8ad746d9ee143cbc0 WatchSource:0}: Error finding container 8d0e58a24d3d8891c6c66af5e6e652482bf62506026627c8ad746d9ee143cbc0: Status 404 returned error can't find the container with id 8d0e58a24d3d8891c6c66af5e6e652482bf62506026627c8ad746d9ee143cbc0 Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.256029 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5676bc86fc-b7j6f"] Feb 18 
14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.810632 4957 generic.go:334] "Generic (PLEG): container finished" podID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerID="133506c284a0e1a4ac9c63847af0827d35d039576c2b6156ffbfb558923a8f53" exitCode=0 Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.810778 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" event={"ID":"d2f29bea-5fda-4080-a98b-b869c41e3dab","Type":"ContainerDied","Data":"133506c284a0e1a4ac9c63847af0827d35d039576c2b6156ffbfb558923a8f53"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.811295 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" event={"ID":"d2f29bea-5fda-4080-a98b-b869c41e3dab","Type":"ContainerStarted","Data":"a41d5314140c7607f5fa3643cacebb6378b52a22d2153671c7e3b58500454031"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.821851 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5676bc86fc-b7j6f" event={"ID":"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c","Type":"ContainerStarted","Data":"bd090441f097ae3e6bab956274dc876ec71208091f2a8481c088b365f4c7f37a"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.825661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc87d6466-mkw7v" event={"ID":"49162e61-07d2-4596-a0c5-fd8f90890e35","Type":"ContainerStarted","Data":"8d0e58a24d3d8891c6c66af5e6e652482bf62506026627c8ad746d9ee143cbc0"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.845844 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb55dbb2-b556-42c0-af4c-e94f389576c3","Type":"ContainerStarted","Data":"a23e1035e2130f7ffc0042e2de1aeb8444de6b117f9304a7d6f61a57faddbd56"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.845912 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb55dbb2-b556-42c0-af4c-e94f389576c3","Type":"ContainerStarted","Data":"545dc7fad2b0e39e450bfe027538249e70b9a92ad218511a077714df1c2410a4"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.871898 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfvgf" event={"ID":"c6255f04-0970-449b-b48c-2a812f42b7c5","Type":"ContainerStarted","Data":"934ba6875ed91c5f5497a89930bdc9804dd8370aed7d6ff922e51b5f30ec1ef2"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.882618 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9","Type":"ContainerStarted","Data":"7af7e42045fe579837329e1cb64f1e6505f8fdfdeeb82dc284b3e513d17e3132"} Feb 18 14:54:58 crc kubenswrapper[4957]: I0218 14:54:58.891905 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dfvgf" podStartSLOduration=16.891889194 podStartE2EDuration="16.891889194s" podCreationTimestamp="2026-02-18 14:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:58.891756231 +0000 UTC m=+1405.412620985" watchObservedRunningTime="2026-02-18 14:54:58.891889194 +0000 UTC m=+1405.412753938" Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.931066 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9","Type":"ContainerStarted","Data":"77670be4dca07a5341a66ef224ba145df08a99a0a4195e0749d8b4fc0773cc5d"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.939212 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" event={"ID":"d2f29bea-5fda-4080-a98b-b869c41e3dab","Type":"ContainerStarted","Data":"709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.939602 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.948811 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5676bc86fc-b7j6f" event={"ID":"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c","Type":"ContainerStarted","Data":"724ae5090c9c953a769bc5cdbb49510d3e37ce00a400b59638e3888375d0c426"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.960783 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerStarted","Data":"2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.963774 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc87d6466-mkw7v" event={"ID":"49162e61-07d2-4596-a0c5-fd8f90890e35","Type":"ContainerStarted","Data":"3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.963814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc87d6466-mkw7v" event={"ID":"49162e61-07d2-4596-a0c5-fd8f90890e35","Type":"ContainerStarted","Data":"3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.964705 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dc87d6466-mkw7v" Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.983464 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb55dbb2-b556-42c0-af4c-e94f389576c3","Type":"ContainerStarted","Data":"1e2f8288271c4a9f01a34ca593d2f6701401f7c597ca71e28d78a04c62be5f52"} Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.983715 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=17.983686723 podStartE2EDuration="17.983686723s" podCreationTimestamp="2026-02-18 14:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:59.964045405 +0000 UTC m=+1406.484910149" watchObservedRunningTime="2026-02-18 14:54:59.983686723 +0000 UTC m=+1406.504551467" Feb 18 14:54:59 crc kubenswrapper[4957]: I0218 14:54:59.989473 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerStarted","Data":"4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece"} Feb 18 14:55:00 crc kubenswrapper[4957]: I0218 14:55:00.006099 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dc87d6466-mkw7v" podStartSLOduration=9.006073461 podStartE2EDuration="9.006073461s" podCreationTimestamp="2026-02-18 14:54:51 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:54:59.994749283 +0000 UTC m=+1406.515614027" watchObservedRunningTime="2026-02-18 14:55:00.006073461 +0000 UTC m=+1406.526938205" Feb 18 14:55:00 crc kubenswrapper[4957]: I0218 14:55:00.034020 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" podStartSLOduration=9.033999599 podStartE2EDuration="9.033999599s" podCreationTimestamp="2026-02-18 14:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:00.017139981 +0000 UTC m=+1406.538004725" watchObservedRunningTime="2026-02-18 14:55:00.033999599 +0000 UTC m=+1406.554864343" Feb 18 14:55:00 crc kubenswrapper[4957]: I0218 14:55:00.042066 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.042045142 podStartE2EDuration="18.042045142s" podCreationTimestamp="2026-02-18 14:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:00.038447668 +0000 UTC m=+1406.559312412" watchObservedRunningTime="2026-02-18 14:55:00.042045142 +0000 UTC m=+1406.562909886" Feb 18 14:55:01 crc kubenswrapper[4957]: I0218 14:55:01.002677 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5676bc86fc-b7j6f" event={"ID":"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c","Type":"ContainerStarted","Data":"24ce149ff122d658c6184fdc73ffe70fa4bf2e46b11c6b4dc48688856ef318f2"} Feb 18 14:55:01 crc kubenswrapper[4957]: I0218 14:55:01.004044 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:55:01 crc kubenswrapper[4957]: I0218 14:55:01.026450 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5676bc86fc-b7j6f" podStartSLOduration=8.026429112 podStartE2EDuration="8.026429112s" podCreationTimestamp="2026-02-18 14:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:01.019277435 +0000 UTC m=+1407.540142179" watchObservedRunningTime="2026-02-18 14:55:01.026429112 +0000 UTC m=+1407.547293856" Feb 18 14:55:02 crc kubenswrapper[4957]: I0218 14:55:02.021878 4957 generic.go:334] "Generic (PLEG): container finished" podID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerID="4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece" exitCode=0 Feb 18 14:55:02 crc kubenswrapper[4957]: I0218 14:55:02.022068 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerDied","Data":"4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece"} Feb 18 14:55:02 crc kubenswrapper[4957]: I0218 14:55:02.987078 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:02 crc kubenswrapper[4957]: I0218 14:55:02.987522 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.037711 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.056787 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.056836 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.062095 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.087103 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.088619 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.123315 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:55:03 crc kubenswrapper[4957]: I0218 14:55:03.167405 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:55:04 crc kubenswrapper[4957]: I0218 14:55:04.119704 4957 generic.go:334] "Generic (PLEG): container finished" podID="9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" containerID="731abfa56ca6f9c33d8db0d3a24173e07b4e8b32b32805453f0ea5349d13cba3" exitCode=0 Feb 18 14:55:04 crc kubenswrapper[4957]: I0218 14:55:04.120151 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d89wt" event={"ID":"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6","Type":"ContainerDied","Data":"731abfa56ca6f9c33d8db0d3a24173e07b4e8b32b32805453f0ea5349d13cba3"} Feb 18 14:55:04 crc kubenswrapper[4957]: I0218 14:55:04.129998 4957 generic.go:334] "Generic (PLEG): container finished" podID="c6255f04-0970-449b-b48c-2a812f42b7c5" containerID="934ba6875ed91c5f5497a89930bdc9804dd8370aed7d6ff922e51b5f30ec1ef2" exitCode=0 Feb 18 14:55:04 crc kubenswrapper[4957]: I0218 14:55:04.131496 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfvgf" event={"ID":"c6255f04-0970-449b-b48c-2a812f42b7c5","Type":"ContainerDied","Data":"934ba6875ed91c5f5497a89930bdc9804dd8370aed7d6ff922e51b5f30ec1ef2"} Feb 18 14:55:04 crc kubenswrapper[4957]: I0218 14:55:04.132492 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:55:04 crc kubenswrapper[4957]: I0218 14:55:04.132519 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:55:05 crc kubenswrapper[4957]: I0218 14:55:05.147249 4957 generic.go:334] "Generic (PLEG): container finished" podID="19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" containerID="dcdeccaa18d47ed9aba61a7acce59cd112e1073f48ac721be148f83adf44d04e" exitCode=0 Feb 18 14:55:05 crc kubenswrapper[4957]: I0218 14:55:05.147454 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmmp8" event={"ID":"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e","Type":"ContainerDied","Data":"dcdeccaa18d47ed9aba61a7acce59cd112e1073f48ac721be148f83adf44d04e"} Feb 18 14:55:06 crc kubenswrapper[4957]: I0218 14:55:06.156786 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:55:06 crc kubenswrapper[4957]: I0218 
14:55:06.157163 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:55:06 crc kubenswrapper[4957]: I0218 14:55:06.946637 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:55:06 crc kubenswrapper[4957]: I0218 14:55:06.949047 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:06 crc kubenswrapper[4957]: I0218 14:55:06.949131 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:55:06 crc kubenswrapper[4957]: I0218 14:55:06.985872 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.010141 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.024618 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.165996 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfvgf" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.166731 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8v9g9"] Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.169671 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="dnsmasq-dns" containerID="cri-o://77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54" gracePeriod=10 Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.175857 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267018 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfnmm\" (UniqueName: \"kubernetes.io/projected/c6255f04-0970-449b-b48c-2a812f42b7c5-kube-api-access-jfnmm\") pod \"c6255f04-0970-449b-b48c-2a812f42b7c5\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267060 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-scripts\") pod \"c6255f04-0970-449b-b48c-2a812f42b7c5\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267160 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-config-data\") pod \"c6255f04-0970-449b-b48c-2a812f42b7c5\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267198 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-credential-keys\") pod \"c6255f04-0970-449b-b48c-2a812f42b7c5\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267267 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-fernet-keys\") pod \"c6255f04-0970-449b-b48c-2a812f42b7c5\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267299 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4d9j\" (UniqueName: \"kubernetes.io/projected/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-kube-api-access-p4d9j\") pod \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267351 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-db-sync-config-data\") pod \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267386 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-combined-ca-bundle\") pod \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\" (UID: \"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.267461 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-combined-ca-bundle\") pod \"c6255f04-0970-449b-b48c-2a812f42b7c5\" (UID: \"c6255f04-0970-449b-b48c-2a812f42b7c5\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.307546 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-kube-api-access-p4d9j" (OuterVolumeSpecName: "kube-api-access-p4d9j") pod 
"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" (UID: "19ecbaa7-5ddf-4835-b9b7-01a0c156b75e"). InnerVolumeSpecName "kube-api-access-p4d9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.308665 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-scripts" (OuterVolumeSpecName: "scripts") pod "c6255f04-0970-449b-b48c-2a812f42b7c5" (UID: "c6255f04-0970-449b-b48c-2a812f42b7c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.308877 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d89wt" event={"ID":"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6","Type":"ContainerDied","Data":"73dd275d8967a06e469c52410df3c84377073f58d8a5bf7bed460aa5a0a06135"} Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.308905 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73dd275d8967a06e469c52410df3c84377073f58d8a5bf7bed460aa5a0a06135" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.317828 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c6255f04-0970-449b-b48c-2a812f42b7c5" (UID: "c6255f04-0970-449b-b48c-2a812f42b7c5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.328978 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jmmp8" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.329680 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmmp8" event={"ID":"19ecbaa7-5ddf-4835-b9b7-01a0c156b75e","Type":"ContainerDied","Data":"93970d30a996a8ab3d90d6cda11e3c1ff7c5c3dab30e49b5bd86a1ad61e5d718"} Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.329844 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93970d30a996a8ab3d90d6cda11e3c1ff7c5c3dab30e49b5bd86a1ad61e5d718" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.344623 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" (UID: "19ecbaa7-5ddf-4835-b9b7-01a0c156b75e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.356980 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c6255f04-0970-449b-b48c-2a812f42b7c5" (UID: "c6255f04-0970-449b-b48c-2a812f42b7c5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.356990 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6255f04-0970-449b-b48c-2a812f42b7c5-kube-api-access-jfnmm" (OuterVolumeSpecName: "kube-api-access-jfnmm") pod "c6255f04-0970-449b-b48c-2a812f42b7c5" (UID: "c6255f04-0970-449b-b48c-2a812f42b7c5"). 
InnerVolumeSpecName "kube-api-access-jfnmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.360378 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfvgf" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.360608 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfvgf" event={"ID":"c6255f04-0970-449b-b48c-2a812f42b7c5","Type":"ContainerDied","Data":"669a4b34e552ae5f9a0f6b3ee57a7a1a2681915ef58d677310bb98f80ac3a3af"} Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.360643 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669a4b34e552ae5f9a0f6b3ee57a7a1a2681915ef58d677310bb98f80ac3a3af" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.372048 4957 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.388622 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfnmm\" (UniqueName: \"kubernetes.io/projected/c6255f04-0970-449b-b48c-2a812f42b7c5-kube-api-access-jfnmm\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.388642 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.388658 4957 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.388669 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.388682 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4d9j\" (UniqueName: \"kubernetes.io/projected/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-kube-api-access-p4d9j\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.395695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" (UID: "19ecbaa7-5ddf-4835-b9b7-01a0c156b75e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.420294 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d89wt" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.449106 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-config-data" (OuterVolumeSpecName: "config-data") pod "c6255f04-0970-449b-b48c-2a812f42b7c5" (UID: "c6255f04-0970-449b-b48c-2a812f42b7c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.479560 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6255f04-0970-449b-b48c-2a812f42b7c5" (UID: "c6255f04-0970-449b-b48c-2a812f42b7c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.492128 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42qld\" (UniqueName: \"kubernetes.io/projected/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-kube-api-access-42qld\") pod \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.492203 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-logs\") pod \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.492263 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-scripts\") pod \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.492441 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-combined-ca-bundle\") pod \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.492464 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-config-data\") pod \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\" (UID: \"9f3c5afd-c757-4e78-9f08-3d55f1b32ab6\") " Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.493038 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.493052 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.493063 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6255f04-0970-449b-b48c-2a812f42b7c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.493237 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-logs" (OuterVolumeSpecName: "logs") pod "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" (UID: "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.500382 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-scripts" (OuterVolumeSpecName: "scripts") pod "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" (UID: "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.505351 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-kube-api-access-42qld" (OuterVolumeSpecName: "kube-api-access-42qld") pod "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" (UID: "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6"). InnerVolumeSpecName "kube-api-access-42qld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.555360 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-config-data" (OuterVolumeSpecName: "config-data") pod "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" (UID: "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.594717 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42qld\" (UniqueName: \"kubernetes.io/projected/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-kube-api-access-42qld\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.595247 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.595312 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.595371 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.599667 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" (UID: "9f3c5afd-c757-4e78-9f08-3d55f1b32ab6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.699758 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:07 crc kubenswrapper[4957]: I0218 14:55:07.980131 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.114854 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g62dr\" (UniqueName: \"kubernetes.io/projected/36143c0c-ebdb-4642-9926-b8d4601d0aae-kube-api-access-g62dr\") pod \"36143c0c-ebdb-4642-9926-b8d4601d0aae\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.114938 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-config\") pod \"36143c0c-ebdb-4642-9926-b8d4601d0aae\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.115000 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-swift-storage-0\") pod \"36143c0c-ebdb-4642-9926-b8d4601d0aae\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.115122 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-sb\") pod \"36143c0c-ebdb-4642-9926-b8d4601d0aae\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.115195 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-nb\") pod \"36143c0c-ebdb-4642-9926-b8d4601d0aae\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.115264 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-svc\") pod \"36143c0c-ebdb-4642-9926-b8d4601d0aae\" (UID: \"36143c0c-ebdb-4642-9926-b8d4601d0aae\") " Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.127581 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36143c0c-ebdb-4642-9926-b8d4601d0aae-kube-api-access-g62dr" (OuterVolumeSpecName: "kube-api-access-g62dr") pod "36143c0c-ebdb-4642-9926-b8d4601d0aae" (UID: "36143c0c-ebdb-4642-9926-b8d4601d0aae"). InnerVolumeSpecName "kube-api-access-g62dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.210102 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36143c0c-ebdb-4642-9926-b8d4601d0aae" (UID: "36143c0c-ebdb-4642-9926-b8d4601d0aae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.228396 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g62dr\" (UniqueName: \"kubernetes.io/projected/36143c0c-ebdb-4642-9926-b8d4601d0aae-kube-api-access-g62dr\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.228740 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.240598 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36143c0c-ebdb-4642-9926-b8d4601d0aae" (UID: "36143c0c-ebdb-4642-9926-b8d4601d0aae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.257551 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-config" (OuterVolumeSpecName: "config") pod "36143c0c-ebdb-4642-9926-b8d4601d0aae" (UID: "36143c0c-ebdb-4642-9926-b8d4601d0aae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.271790 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36143c0c-ebdb-4642-9926-b8d4601d0aae" (UID: "36143c0c-ebdb-4642-9926-b8d4601d0aae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.297955 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "36143c0c-ebdb-4642-9926-b8d4601d0aae" (UID: "36143c0c-ebdb-4642-9926-b8d4601d0aae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.331155 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.331191 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.331204 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.331217 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36143c0c-ebdb-4642-9926-b8d4601d0aae-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.372708 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-59cd79686b-x6zk5"] Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.373915 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="init" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.374011 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="init" Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.374095 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="dnsmasq-dns" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.374166 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="dnsmasq-dns" Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.374259 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" containerName="placement-db-sync" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.374348 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" containerName="placement-db-sync" Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.374497 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" containerName="barbican-db-sync" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.374607 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" containerName="barbican-db-sync" Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.374721 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6255f04-0970-449b-b48c-2a812f42b7c5" containerName="keystone-bootstrap" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.374801 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6255f04-0970-449b-b48c-2a812f42b7c5" containerName="keystone-bootstrap" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.375139 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="dnsmasq-dns" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.375248 4957 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" containerName="placement-db-sync" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.375351 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" containerName="barbican-db-sync" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.375463 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6255f04-0970-449b-b48c-2a812f42b7c5" containerName="keystone-bootstrap" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.376366 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59cd79686b-x6zk5"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.376552 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.386740 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.387055 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.387152 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.387197 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.387380 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.387516 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-t4lzm" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.388969 4957 generic.go:334] "Generic (PLEG): container finished" podID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerID="77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54" exitCode=0 Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.389257 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.389498 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" event={"ID":"36143c0c-ebdb-4642-9926-b8d4601d0aae","Type":"ContainerDied","Data":"77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54"} Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.389558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" event={"ID":"36143c0c-ebdb-4642-9926-b8d4601d0aae","Type":"ContainerDied","Data":"e3ffd264f750f63b668cbc55c343f10fa70a4c611492f1af8065b9abe22223e8"} Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.389574 4957 scope.go:117] "RemoveContainer" containerID="77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.429250 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerStarted","Data":"127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3"} Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.436665 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-public-tls-certs\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.436782 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-config-data\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.436825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-credential-keys\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.436877 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-fernet-keys\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.436976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvhgb\" (UniqueName: \"kubernetes.io/projected/47942968-a16c-4e8d-8aa8-2a54303782a5-kube-api-access-pvhgb\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.436993 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-internal-tls-certs\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " 
pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.437013 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-scripts\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.437080 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-combined-ca-bundle\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.441977 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerStarted","Data":"722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355"} Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.443889 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sg7fz" event={"ID":"a3b12676-eb60-406c-a019-461370859d2a","Type":"ContainerStarted","Data":"529c06b3faf50c75256e4096a91e6712527ce6db69b811695032087f8aab43f8"} Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.444555 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d89wt" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.447921 4957 scope.go:117] "RemoveContainer" containerID="99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.457858 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8v9g9"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.482286 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8v9g9"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.498102 4957 scope.go:117] "RemoveContainer" containerID="77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54" Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.503020 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54\": container with ID starting with 77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54 not found: ID does not exist" containerID="77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.503073 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54"} err="failed to get container status \"77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54\": rpc error: code = NotFound desc = could not find container \"77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54\": container with ID starting with 77534336adbe533d04aee751547e718c56d01b0a78db0ae9c22246033d0b8b54 not found: ID does not exist" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.503106 4957 scope.go:117] "RemoveContainer" 
containerID="99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615" Feb 18 14:55:08 crc kubenswrapper[4957]: E0218 14:55:08.503777 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615\": container with ID starting with 99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615 not found: ID does not exist" containerID="99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.503799 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615"} err="failed to get container status \"99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615\": rpc error: code = NotFound desc = could not find container \"99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615\": container with ID starting with 99cf3770a0dfe46908a29c5c2b3ecee59f3ba107fe308135a158ef02665bb615 not found: ID does not exist" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.519690 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cjglk" podStartSLOduration=26.120322214 podStartE2EDuration="35.519661911s" podCreationTimestamp="2026-02-18 14:54:33 +0000 UTC" firstStartedPulling="2026-02-18 14:54:57.756915707 +0000 UTC m=+1404.277780451" lastFinishedPulling="2026-02-18 14:55:07.156255404 +0000 UTC m=+1413.677120148" observedRunningTime="2026-02-18 14:55:08.467900423 +0000 UTC m=+1414.988765177" watchObservedRunningTime="2026-02-18 14:55:08.519661911 +0000 UTC m=+1415.040526655" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.530354 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-sg7fz" podStartSLOduration=3.114845816 podStartE2EDuration="46.530332069s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="2026-02-18 14:54:23.733258924 +0000 UTC m=+1370.254123668" lastFinishedPulling="2026-02-18 14:55:07.148745177 +0000 UTC m=+1413.669609921" observedRunningTime="2026-02-18 14:55:08.503338808 +0000 UTC m=+1415.024203552" watchObservedRunningTime="2026-02-18 14:55:08.530332069 +0000 UTC m=+1415.051196823" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.538886 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-public-tls-certs\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.538984 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-config-data\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.539023 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-credential-keys\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc 
kubenswrapper[4957]: I0218 14:55:08.539078 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-fernet-keys\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.539191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvhgb\" (UniqueName: \"kubernetes.io/projected/47942968-a16c-4e8d-8aa8-2a54303782a5-kube-api-access-pvhgb\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.539215 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-internal-tls-certs\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.539241 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-scripts\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.539412 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-combined-ca-bundle\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.546750 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-combined-ca-bundle\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.548628 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-config-data\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.555499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-scripts\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.556854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-internal-tls-certs\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.563542 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-credential-keys\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.564471 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-fernet-keys\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.573015 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47942968-a16c-4e8d-8aa8-2a54303782a5-public-tls-certs\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.579732 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-64456c6c55-glfbl"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.581523 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.603279 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.604557 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.604858 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l98wd" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.612703 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvhgb\" (UniqueName: \"kubernetes.io/projected/47942968-a16c-4e8d-8aa8-2a54303782a5-kube-api-access-pvhgb\") pod \"keystone-59cd79686b-x6zk5\" (UID: \"47942968-a16c-4e8d-8aa8-2a54303782a5\") " pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.642739 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.642803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data-custom\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.642898 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhbtq\" (UniqueName: \"kubernetes.io/projected/1a219f4d-fd2e-4398-95c0-8005624a8c31-kube-api-access-rhbtq\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.642945 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a219f4d-fd2e-4398-95c0-8005624a8c31-logs\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.642991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-combined-ca-bundle\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.643818 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64456c6c55-glfbl"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.753172 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.769373 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.776372 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data-custom\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.777766 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhbtq\" (UniqueName: \"kubernetes.io/projected/1a219f4d-fd2e-4398-95c0-8005624a8c31-kube-api-access-rhbtq\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.785498 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a219f4d-fd2e-4398-95c0-8005624a8c31-logs\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.785889 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-combined-ca-bundle\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.775735 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.769486 4957 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/barbican-keystone-listener-c445bc8f8-ltx54"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.789987 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.791595 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a219f4d-fd2e-4398-95c0-8005624a8c31-logs\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.808887 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data-custom\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.810853 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.828467 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-combined-ca-bundle\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.836069 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhbtq\" (UniqueName: \"kubernetes.io/projected/1a219f4d-fd2e-4398-95c0-8005624a8c31-kube-api-access-rhbtq\") pod \"barbican-worker-64456c6c55-glfbl\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.872208 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c445bc8f8-ltx54"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.907269 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7667cffcf-zch65"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.910961 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7cb1bd6-6151-4d00-834c-dbe330c6506b-logs\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.911393 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data-custom\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.911832 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: 
\"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.911979 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9xn\" (UniqueName: \"kubernetes.io/projected/c7cb1bd6-6151-4d00-834c-dbe330c6506b-kube-api-access-8b9xn\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.912115 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-combined-ca-bundle\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.912481 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.927208 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fffb49c9b-fm77z"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.930085 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.947592 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7667cffcf-zch65"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.948170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.974543 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hdpgd"] Feb 18 14:55:08 crc kubenswrapper[4957]: I0218 14:55:08.980697 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.017069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-combined-ca-bundle\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.017520 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4mxb\" (UniqueName: \"kubernetes.io/projected/a84b8763-62fd-4a64-ab6e-276ca0488599-kube-api-access-h4mxb\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.017736 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7cb1bd6-6151-4d00-834c-dbe330c6506b-logs\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.017952 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data-custom\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.018085 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7cb1bd6-6151-4d00-834c-dbe330c6506b-logs\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.018370 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4370835e-b763-479d-b742-fe754a24bd1b-logs\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.018580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-combined-ca-bundle\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.018862 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-config-data\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.018932 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-config-data\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019036 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84b8763-62fd-4a64-ab6e-276ca0488599-logs\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019117 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019171 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvr7l\" (UniqueName: \"kubernetes.io/projected/4370835e-b763-479d-b742-fe754a24bd1b-kube-api-access-vvr7l\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019223 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b9xn\" (UniqueName: \"kubernetes.io/projected/c7cb1bd6-6151-4d00-834c-dbe330c6506b-kube-api-access-8b9xn\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019281 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-combined-ca-bundle\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019344 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-config-data-custom\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.019559 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-config-data-custom\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.026664 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data-custom\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: 
\"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.028791 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.029710 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-combined-ca-bundle\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.033782 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fffb49c9b-fm77z"] Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.053453 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b9xn\" (UniqueName: \"kubernetes.io/projected/c7cb1bd6-6151-4d00-834c-dbe330c6506b-kube-api-access-8b9xn\") pod \"barbican-keystone-listener-c445bc8f8-ltx54\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") " pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.068789 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hdpgd"] Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122184 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-config-data-custom\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122242 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122274 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-config-data-custom\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122300 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-config\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122326 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-combined-ca-bundle\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4mxb\" (UniqueName: \"kubernetes.io/projected/a84b8763-62fd-4a64-ab6e-276ca0488599-kube-api-access-h4mxb\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122399 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122455 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-svc\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122504 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4370835e-b763-479d-b742-fe754a24bd1b-logs\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122525 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-combined-ca-bundle\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122541 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-config-data\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn27w\" (UniqueName: \"kubernetes.io/projected/da55f937-4545-48b6-93df-ff81f7215472-kube-api-access-vn27w\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 
14:55:09.122599 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-config-data\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122631 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84b8763-62fd-4a64-ab6e-276ca0488599-logs\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.122677 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvr7l\" (UniqueName: \"kubernetes.io/projected/4370835e-b763-479d-b742-fe754a24bd1b-kube-api-access-vvr7l\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.126838 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4370835e-b763-479d-b742-fe754a24bd1b-logs\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.127594 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-76ccd9d8b4-5flnw"] Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.130020 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.130490 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84b8763-62fd-4a64-ab6e-276ca0488599-logs\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.139765 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76ccd9d8b4-5flnw"] Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.140909 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.141384 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.141778 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-27zkn" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.144621 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-combined-ca-bundle\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.151066 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.151309 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.152230 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-config-data\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.153095 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-768df8c8bb-2s2nt"] Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.154974 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-config-data-custom\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.154981 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.160039 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.167550 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-config-data\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.174128 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-768df8c8bb-2s2nt"] Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.183752 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.187690 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4370835e-b763-479d-b742-fe754a24bd1b-combined-ca-bundle\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.192482 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4mxb\" (UniqueName: \"kubernetes.io/projected/a84b8763-62fd-4a64-ab6e-276ca0488599-kube-api-access-h4mxb\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.193196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvr7l\" (UniqueName: \"kubernetes.io/projected/4370835e-b763-479d-b742-fe754a24bd1b-kube-api-access-vvr7l\") pod \"barbican-worker-7667cffcf-zch65\" (UID: \"4370835e-b763-479d-b742-fe754a24bd1b\") " pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.199102 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84b8763-62fd-4a64-ab6e-276ca0488599-config-data-custom\") pod \"barbican-keystone-listener-fffb49c9b-fm77z\" (UID: \"a84b8763-62fd-4a64-ab6e-276ca0488599\") " pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.229802 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-combined-ca-bundle\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230239 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-svc\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230329 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hhnsj\" (UniqueName: \"kubernetes.io/projected/226e0541-e9a7-4516-9a20-94ace7e03a41-kube-api-access-hhnsj\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230352 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230376 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/226e0541-e9a7-4516-9a20-94ace7e03a41-logs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230407 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn27w\" (UniqueName: \"kubernetes.io/projected/da55f937-4545-48b6-93df-ff81f7215472-kube-api-access-vn27w\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230578 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-public-tls-certs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230639 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230667 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-scripts\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-config\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.230734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-internal-tls-certs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.231929 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-svc\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.232795 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.232979 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.233666 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-config\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.233815 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-config-data\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.233863 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.235571 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.266662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn27w\" (UniqueName: \"kubernetes.io/projected/da55f937-4545-48b6-93df-ff81f7215472-kube-api-access-vn27w\") pod \"dnsmasq-dns-85ff748b95-hdpgd\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.299163 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7667cffcf-zch65" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.336822 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/226e0541-e9a7-4516-9a20-94ace7e03a41-logs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.336885 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401e08eb-2650-46f4-9906-7780d2655b31-logs\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337058 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-public-tls-certs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337132 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-scripts\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337196 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-internal-tls-certs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data-custom\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337322 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-config-data\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337358 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-combined-ca-bundle\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-combined-ca-bundle\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " 
pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337443 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337489 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvgh\" (UniqueName: \"kubernetes.io/projected/401e08eb-2650-46f4-9906-7780d2655b31-kube-api-access-xzvgh\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.337580 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhnsj\" (UniqueName: \"kubernetes.io/projected/226e0541-e9a7-4516-9a20-94ace7e03a41-kube-api-access-hhnsj\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.338951 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/226e0541-e9a7-4516-9a20-94ace7e03a41-logs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.348444 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-internal-tls-certs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.374601 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-config-data\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.374973 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-public-tls-certs\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.375847 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-scripts\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.389370 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226e0541-e9a7-4516-9a20-94ace7e03a41-combined-ca-bundle\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 
14:55:09.390943 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhnsj\" (UniqueName: \"kubernetes.io/projected/226e0541-e9a7-4516-9a20-94ace7e03a41-kube-api-access-hhnsj\") pod \"placement-76ccd9d8b4-5flnw\" (UID: \"226e0541-e9a7-4516-9a20-94ace7e03a41\") " pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.441296 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data-custom\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.441409 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-combined-ca-bundle\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.456364 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data-custom\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.456559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.456682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvgh\" (UniqueName: \"kubernetes.io/projected/401e08eb-2650-46f4-9906-7780d2655b31-kube-api-access-xzvgh\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.456802 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401e08eb-2650-46f4-9906-7780d2655b31-logs\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.459388 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401e08eb-2650-46f4-9906-7780d2655b31-logs\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.460160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-combined-ca-bundle\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.494783 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.502575 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvgh\" (UniqueName: \"kubernetes.io/projected/401e08eb-2650-46f4-9906-7780d2655b31-kube-api-access-xzvgh\") pod \"barbican-api-768df8c8bb-2s2nt\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.528887 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.571479 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.620040 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.641879 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:09 crc kubenswrapper[4957]: I0218 14:55:09.954010 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64456c6c55-glfbl"] Feb 18 14:55:10 crc kubenswrapper[4957]: I0218 14:55:10.200854 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59cd79686b-x6zk5"] Feb 18 14:55:10 crc kubenswrapper[4957]: I0218 14:55:10.548769 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" path="/var/lib/kubelet/pods/36143c0c-ebdb-4642-9926-b8d4601d0aae/volumes" Feb 18 14:55:10 crc kubenswrapper[4957]: I0218 14:55:10.563252 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c445bc8f8-ltx54"] Feb 18 14:55:10 crc kubenswrapper[4957]: I0218 14:55:10.683632 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64456c6c55-glfbl" event={"ID":"1a219f4d-fd2e-4398-95c0-8005624a8c31","Type":"ContainerStarted","Data":"5214813722e15c1cf7444584e68265d9adea728aba9d81fb043b186663c8fd2f"} Feb 18 14:55:10 crc kubenswrapper[4957]: I0218 14:55:10.692927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59cd79686b-x6zk5" event={"ID":"47942968-a16c-4e8d-8aa8-2a54303782a5","Type":"ContainerStarted","Data":"659eed08ee9b128c58844276782eb759107e927fc91cbd05aa1abb03ddaeaa31"} Feb 18 14:55:10 crc kubenswrapper[4957]: W0218 14:55:10.766652 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7cb1bd6_6151_4d00_834c_dbe330c6506b.slice/crio-3d9cab8a330c1f90ffa069c4d603d8e130496155a7e7aa56f4f6077c1145e3b8 WatchSource:0}: Error finding container 3d9cab8a330c1f90ffa069c4d603d8e130496155a7e7aa56f4f6077c1145e3b8: Status 404 returned error can't find the container with id 3d9cab8a330c1f90ffa069c4d603d8e130496155a7e7aa56f4f6077c1145e3b8 Feb 18 14:55:10 crc kubenswrapper[4957]: I0218 14:55:10.899972 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fffb49c9b-fm77z"] Feb 18 14:55:10 crc kubenswrapper[4957]: 
I0218 14:55:10.922594 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7667cffcf-zch65"] Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.356355 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-768df8c8bb-2s2nt"] Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.388572 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76ccd9d8b4-5flnw"] Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.452697 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hdpgd"] Feb 18 14:55:11 crc kubenswrapper[4957]: W0218 14:55:11.457271 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda55f937_4545_48b6_93df_ff81f7215472.slice/crio-10586da96e226f4439f4a77ad0369d76634aecf85e0385c8f69b339cd8d0c2a2 WatchSource:0}: Error finding container 10586da96e226f4439f4a77ad0369d76634aecf85e0385c8f69b339cd8d0c2a2: Status 404 returned error can't find the container with id 10586da96e226f4439f4a77ad0369d76634aecf85e0385c8f69b339cd8d0c2a2 Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.727113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" event={"ID":"a84b8763-62fd-4a64-ab6e-276ca0488599","Type":"ContainerStarted","Data":"353e4c6e43ecc7b7ac76d5db950cbbd6a7a266b3af9fb62630e88ff4df64dd75"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.730808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" event={"ID":"c7cb1bd6-6151-4d00-834c-dbe330c6506b","Type":"ContainerStarted","Data":"3d9cab8a330c1f90ffa069c4d603d8e130496155a7e7aa56f4f6077c1145e3b8"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.741701 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" event={"ID":"da55f937-4545-48b6-93df-ff81f7215472","Type":"ContainerStarted","Data":"10586da96e226f4439f4a77ad0369d76634aecf85e0385c8f69b339cd8d0c2a2"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.752684 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-768df8c8bb-2s2nt" event={"ID":"401e08eb-2650-46f4-9906-7780d2655b31","Type":"ContainerStarted","Data":"f65b50ccbf6ec25e6dbf4738e74eec0fa9253b77b98b7cda35931fde9e21340b"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.758698 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76ccd9d8b4-5flnw" event={"ID":"226e0541-e9a7-4516-9a20-94ace7e03a41","Type":"ContainerStarted","Data":"03d1aef415a253fc0708d36faf835c929943efd8443b03633aeae2e489c85f44"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.762569 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59cd79686b-x6zk5" event={"ID":"47942968-a16c-4e8d-8aa8-2a54303782a5","Type":"ContainerStarted","Data":"8a61963408929c5052c47fe71e04d5970638d5cf16bef5410517de1bea19e290"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.763602 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.777507 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7667cffcf-zch65" 
event={"ID":"4370835e-b763-479d-b742-fe754a24bd1b","Type":"ContainerStarted","Data":"92576cec54a9ffacc63a91ad2e911d255693617ae6d0feb7f0fd7abc6429e609"} Feb 18 14:55:11 crc kubenswrapper[4957]: I0218 14:55:11.818357 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-59cd79686b-x6zk5" podStartSLOduration=3.81833396 podStartE2EDuration="3.81833396s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:11.80002198 +0000 UTC m=+1418.320886734" watchObservedRunningTime="2026-02-18 14:55:11.81833396 +0000 UTC m=+1418.339198704" Feb 18 14:55:12 crc kubenswrapper[4957]: E0218 14:55:12.624777 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda55f937_4545_48b6_93df_ff81f7215472.slice/crio-214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda55f937_4545_48b6_93df_ff81f7215472.slice/crio-conmon-214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.794717 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-768df8c8bb-2s2nt" event={"ID":"401e08eb-2650-46f4-9906-7780d2655b31","Type":"ContainerStarted","Data":"8b62ef971adb34a7f8e7baf58d7c93d620e071e61c1cd5b644cd9b9e0f3bf043"} Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.795057 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-768df8c8bb-2s2nt" event={"ID":"401e08eb-2650-46f4-9906-7780d2655b31","Type":"ContainerStarted","Data":"6a0a47e9c8fe026a739f11a88aebc05bf04a56fe90d8d5154453e2c28c0f6e83"} Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.796505 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.797164 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.806503 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76ccd9d8b4-5flnw" event={"ID":"226e0541-e9a7-4516-9a20-94ace7e03a41","Type":"ContainerStarted","Data":"d24869b943c732b2bebf9cb81858d78c1cef9d45816f98d682e56d872d7ad518"} Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.811160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sgzz" event={"ID":"9205f99b-873e-4f53-9d89-85b77ca7adc1","Type":"ContainerStarted","Data":"a467ab3655df39ef6387264594b9fe1a027463d0576d224cadbd9e56f2573717"} Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.824018 4957 generic.go:334] "Generic (PLEG): container finished" podID="da55f937-4545-48b6-93df-ff81f7215472" containerID="214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064" exitCode=0 Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.824793 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" event={"ID":"da55f937-4545-48b6-93df-ff81f7215472","Type":"ContainerDied","Data":"214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064"} 
Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.842967 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-768df8c8bb-2s2nt" podStartSLOduration=4.842943805 podStartE2EDuration="4.842943805s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:12.820158236 +0000 UTC m=+1419.341022980" watchObservedRunningTime="2026-02-18 14:55:12.842943805 +0000 UTC m=+1419.363808549" Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.879356 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5sgzz" podStartSLOduration=4.807718606 podStartE2EDuration="50.879327458s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="2026-02-18 14:54:24.164238383 +0000 UTC m=+1370.685103127" lastFinishedPulling="2026-02-18 14:55:10.235847235 +0000 UTC m=+1416.756711979" observedRunningTime="2026-02-18 14:55:12.854698745 +0000 UTC m=+1419.375563489" watchObservedRunningTime="2026-02-18 14:55:12.879327458 +0000 UTC m=+1419.400192202" Feb 18 14:55:12 crc kubenswrapper[4957]: I0218 14:55:12.910537 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-8v9g9" podUID="36143c0c-ebdb-4642-9926-b8d4601d0aae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: i/o timeout" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.093569 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c7999fbc4-ttfwg"] Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.101606 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.113365 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.113643 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.113928 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c7999fbc4-ttfwg"] Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180092 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-internal-tls-certs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180593 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-config-data\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180704 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-config-data-custom\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " 
pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180759 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-combined-ca-bundle\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180820 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8blz\" (UniqueName: \"kubernetes.io/projected/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-kube-api-access-g8blz\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180959 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-public-tls-certs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.180991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-logs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-internal-tls-certs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283193 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-config-data\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283257 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-config-data-custom\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283291 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-combined-ca-bundle\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283332 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8blz\" (UniqueName: \"kubernetes.io/projected/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-kube-api-access-g8blz\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: 
\"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283396 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-public-tls-certs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.283412 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-logs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.284710 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-logs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.288760 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-combined-ca-bundle\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.308029 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-config-data\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.315166 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-config-data-custom\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.317720 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-internal-tls-certs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.318923 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-public-tls-certs\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.319397 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8blz\" (UniqueName: \"kubernetes.io/projected/b15f7971-ec1f-4b4e-ae33-45863ceb6b09-kube-api-access-g8blz\") pod \"barbican-api-c7999fbc4-ttfwg\" (UID: \"b15f7971-ec1f-4b4e-ae33-45863ceb6b09\") " pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc 
kubenswrapper[4957]: I0218 14:55:13.510430 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.725722 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:55:13 crc kubenswrapper[4957]: I0218 14:55:13.725765 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:55:14 crc kubenswrapper[4957]: I0218 14:55:14.895361 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjglk" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:14 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:14 crc kubenswrapper[4957]: > Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.291998 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c7999fbc4-ttfwg"] Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.894888 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c7999fbc4-ttfwg" event={"ID":"b15f7971-ec1f-4b4e-ae33-45863ceb6b09","Type":"ContainerStarted","Data":"49d170b37ebe76badfb655551170bf2d8fcf163095fac8b19e62f95680ec9595"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.921683 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" event={"ID":"da55f937-4545-48b6-93df-ff81f7215472","Type":"ContainerStarted","Data":"5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.922236 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.928364 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64456c6c55-glfbl" event={"ID":"1a219f4d-fd2e-4398-95c0-8005624a8c31","Type":"ContainerStarted","Data":"82b4bdb4b219020361ef275eee0678d0796cf9e67c54fc19a15343be03f3308f"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.940308 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76ccd9d8b4-5flnw" event={"ID":"226e0541-e9a7-4516-9a20-94ace7e03a41","Type":"ContainerStarted","Data":"bc8f79a425e1725c918af88886292043df5e2f4b57c594f0eb05780a52fd4be7"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.940508 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.940619 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.951409 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" podStartSLOduration=7.95138102 podStartE2EDuration="7.95138102s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:15.945239242 +0000 UTC m=+1422.466103986" watchObservedRunningTime="2026-02-18 14:55:15.95138102 +0000 UTC m=+1422.472245764" Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.958797 4957 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7667cffcf-zch65" event={"ID":"4370835e-b763-479d-b742-fe754a24bd1b","Type":"ContainerStarted","Data":"ba9902ce8de22d885a5b1873c650b5acb5799381c5a1348f3b34d624110d0981"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.965469 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" event={"ID":"a84b8763-62fd-4a64-ab6e-276ca0488599","Type":"ContainerStarted","Data":"7b1d6244be41f685578f856de337abdd92285949252499453fbe84ad9c25f7ef"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.985918 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" event={"ID":"c7cb1bd6-6151-4d00-834c-dbe330c6506b","Type":"ContainerStarted","Data":"1a96283bc3d2c89ae47ddb632408c838b24913044dc4456d34270990e28285b1"} Feb 18 14:55:15 crc kubenswrapper[4957]: I0218 14:55:15.997537 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-76ccd9d8b4-5flnw" podStartSLOduration=7.997513575 podStartE2EDuration="7.997513575s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:15.967179727 +0000 UTC m=+1422.488044461" watchObservedRunningTime="2026-02-18 14:55:15.997513575 +0000 UTC m=+1422.518378319" Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.021917 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7667cffcf-zch65" event={"ID":"4370835e-b763-479d-b742-fe754a24bd1b","Type":"ContainerStarted","Data":"02c7f86c47951efe2438459fd9a7c1df5b707ba0396813dcfcda1aff757b41e9"} Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.032283 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" event={"ID":"a84b8763-62fd-4a64-ab6e-276ca0488599","Type":"ContainerStarted","Data":"7980e0cf811f9e8cfa06e3aa4f8a10b2e569a80b0873b49adf9b8079186925fa"} Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.042585 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" event={"ID":"c7cb1bd6-6151-4d00-834c-dbe330c6506b","Type":"ContainerStarted","Data":"22589993cf04dd84a5ced76dc2e4b3c636efd99cf1c4056002c72b0814d46ebb"} Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.069896 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7667cffcf-zch65" podStartSLOduration=5.087647386 podStartE2EDuration="9.069872421s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="2026-02-18 14:55:10.902901635 +0000 UTC m=+1417.423766379" lastFinishedPulling="2026-02-18 14:55:14.88512667 +0000 UTC m=+1421.405991414" observedRunningTime="2026-02-18 14:55:17.052968652 +0000 UTC m=+1423.573833396" watchObservedRunningTime="2026-02-18 14:55:17.069872421 +0000 UTC m=+1423.590737165" Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.083537 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c7999fbc4-ttfwg" event={"ID":"b15f7971-ec1f-4b4e-ae33-45863ceb6b09","Type":"ContainerStarted","Data":"35b76dbef12d86bd421c994b8761b39c7afa86065753105b9561a803636245ff"} Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.098540 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-64456c6c55-glfbl" event={"ID":"1a219f4d-fd2e-4398-95c0-8005624a8c31","Type":"ContainerStarted","Data":"5e2a4a205862e096a59181199936c9aa007f13bf22a30eaaa7f77b0413c252e2"} Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.157905 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-64456c6c55-glfbl"] Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.181373 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fffb49c9b-fm77z" podStartSLOduration=5.228421039 podStartE2EDuration="9.181349547s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="2026-02-18 14:55:10.931156062 +0000 UTC m=+1417.452020806" lastFinishedPulling="2026-02-18 14:55:14.88408457 +0000 UTC m=+1421.404949314" observedRunningTime="2026-02-18 14:55:17.086947045 +0000 UTC m=+1423.607811789" watchObservedRunningTime="2026-02-18 14:55:17.181349547 +0000 UTC m=+1423.702214291" Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.197387 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" podStartSLOduration=5.086993237 podStartE2EDuration="9.19736233s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="2026-02-18 14:55:10.775094747 +0000 UTC m=+1417.295959491" lastFinishedPulling="2026-02-18 14:55:14.88546383 +0000 UTC m=+1421.406328584" observedRunningTime="2026-02-18 14:55:17.128805646 +0000 UTC m=+1423.649670400" watchObservedRunningTime="2026-02-18 14:55:17.19736233 +0000 UTC m=+1423.718227074" Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.223390 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c445bc8f8-ltx54"] Feb 18 14:55:17 crc kubenswrapper[4957]: I0218 14:55:17.228180 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-64456c6c55-glfbl" podStartSLOduration=4.559184556 podStartE2EDuration="9.228163231s" podCreationTimestamp="2026-02-18 14:55:08 +0000 UTC" firstStartedPulling="2026-02-18 14:55:10.216151645 +0000 UTC m=+1416.737016389" lastFinishedPulling="2026-02-18 14:55:14.88513031 +0000 UTC m=+1421.405995064" observedRunningTime="2026-02-18 14:55:17.148006652 +0000 UTC m=+1423.668871416" watchObservedRunningTime="2026-02-18 14:55:17.228163231 +0000 UTC m=+1423.749027975" Feb 18 14:55:18 crc kubenswrapper[4957]: I0218 14:55:18.110700 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c7999fbc4-ttfwg" event={"ID":"b15f7971-ec1f-4b4e-ae33-45863ceb6b09","Type":"ContainerStarted","Data":"38623da020f0d20792b8644604d15d975410243c65a3830112145b2e6ca22029"} Feb 18 14:55:18 crc kubenswrapper[4957]: I0218 14:55:18.169639 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c7999fbc4-ttfwg" podStartSLOduration=5.1696174599999996 podStartE2EDuration="5.16961746s" podCreationTimestamp="2026-02-18 14:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:18.157990734 +0000 UTC m=+1424.678855478" watchObservedRunningTime="2026-02-18 14:55:18.16961746 +0000 UTC m=+1424.690482204" Feb 18 14:55:18 crc kubenswrapper[4957]: I0218 14:55:18.511096 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:18 crc 
kubenswrapper[4957]: I0218 14:55:18.511143 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.139954 4957 generic.go:334] "Generic (PLEG): container finished" podID="a3b12676-eb60-406c-a019-461370859d2a" containerID="529c06b3faf50c75256e4096a91e6712527ce6db69b811695032087f8aab43f8" exitCode=0 Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.140058 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sg7fz" event={"ID":"a3b12676-eb60-406c-a019-461370859d2a","Type":"ContainerDied","Data":"529c06b3faf50c75256e4096a91e6712527ce6db69b811695032087f8aab43f8"} Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.140458 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener-log" containerID="cri-o://1a96283bc3d2c89ae47ddb632408c838b24913044dc4456d34270990e28285b1" gracePeriod=30 Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.140580 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener" containerID="cri-o://22589993cf04dd84a5ced76dc2e4b3c636efd99cf1c4056002c72b0814d46ebb" gracePeriod=30 Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.140646 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-64456c6c55-glfbl" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker-log" containerID="cri-o://82b4bdb4b219020361ef275eee0678d0796cf9e67c54fc19a15343be03f3308f" gracePeriod=30 Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.140891 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-64456c6c55-glfbl" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker" containerID="cri-o://5e2a4a205862e096a59181199936c9aa007f13bf22a30eaaa7f77b0413c252e2" gracePeriod=30 Feb 18 14:55:19 crc kubenswrapper[4957]: I0218 14:55:19.736610 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.152864 4957 generic.go:334] "Generic (PLEG): container finished" podID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerID="1a96283bc3d2c89ae47ddb632408c838b24913044dc4456d34270990e28285b1" exitCode=143 Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.153003 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" event={"ID":"c7cb1bd6-6151-4d00-834c-dbe330c6506b","Type":"ContainerDied","Data":"1a96283bc3d2c89ae47ddb632408c838b24913044dc4456d34270990e28285b1"} Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.156505 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64456c6c55-glfbl" event={"ID":"1a219f4d-fd2e-4398-95c0-8005624a8c31","Type":"ContainerDied","Data":"5e2a4a205862e096a59181199936c9aa007f13bf22a30eaaa7f77b0413c252e2"} Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.156409 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a219f4d-fd2e-4398-95c0-8005624a8c31" 
containerID="5e2a4a205862e096a59181199936c9aa007f13bf22a30eaaa7f77b0413c252e2" exitCode=0 Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.156563 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerID="82b4bdb4b219020361ef275eee0678d0796cf9e67c54fc19a15343be03f3308f" exitCode=143 Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.156645 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64456c6c55-glfbl" event={"ID":"1a219f4d-fd2e-4398-95c0-8005624a8c31","Type":"ContainerDied","Data":"82b4bdb4b219020361ef275eee0678d0796cf9e67c54fc19a15343be03f3308f"} Feb 18 14:55:20 crc kubenswrapper[4957]: I0218 14:55:20.866802 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76ccd9d8b4-5flnw" Feb 18 14:55:21 crc kubenswrapper[4957]: I0218 14:55:21.304519 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:21 crc kubenswrapper[4957]: I0218 14:55:21.450148 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.192043 4957 generic.go:334] "Generic (PLEG): container finished" podID="9205f99b-873e-4f53-9d89-85b77ca7adc1" containerID="a467ab3655df39ef6387264594b9fe1a027463d0576d224cadbd9e56f2573717" exitCode=0 Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.192120 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sgzz" event={"ID":"9205f99b-873e-4f53-9d89-85b77ca7adc1","Type":"ContainerDied","Data":"a467ab3655df39ef6387264594b9fe1a027463d0576d224cadbd9e56f2573717"} Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.360670 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6dc87d6466-mkw7v" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.602184 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5676bc86fc-b7j6f"] Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.602529 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5676bc86fc-b7j6f" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-api" containerID="cri-o://724ae5090c9c953a769bc5cdbb49510d3e37ce00a400b59638e3888375d0c426" gracePeriod=30 Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.602691 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5676bc86fc-b7j6f" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-httpd" containerID="cri-o://24ce149ff122d658c6184fdc73ffe70fa4bf2e46b11c6b4dc48688856ef318f2" gracePeriod=30 Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.639492 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7fbd58c64f-dmc49"] Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.656818 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5676bc86fc-b7j6f" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": EOF" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.659071 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.690719 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fbd58c64f-dmc49"] Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764004 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-internal-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764088 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-httpd-config\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764212 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-ovndb-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764263 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-config\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764305 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-combined-ca-bundle\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764333 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-public-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.764451 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmg8\" (UniqueName: \"kubernetes.io/projected/c7e3267f-8a47-48ec-94ee-40aed5e39cff-kube-api-access-rsmg8\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.867543 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-config\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.867616 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-combined-ca-bundle\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.867642 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-public-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.868128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsmg8\" (UniqueName: \"kubernetes.io/projected/c7e3267f-8a47-48ec-94ee-40aed5e39cff-kube-api-access-rsmg8\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.868234 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-internal-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.868278 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-httpd-config\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.868385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-ovndb-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.875798 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-config\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.876678 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-combined-ca-bundle\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.879566 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-internal-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.880733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-public-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: 
\"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.880960 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-httpd-config\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.882404 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e3267f-8a47-48ec-94ee-40aed5e39cff-ovndb-tls-certs\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.893318 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmg8\" (UniqueName: \"kubernetes.io/projected/c7e3267f-8a47-48ec-94ee-40aed5e39cff-kube-api-access-rsmg8\") pod \"neutron-7fbd58c64f-dmc49\" (UID: \"c7e3267f-8a47-48ec-94ee-40aed5e39cff\") " pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:22 crc kubenswrapper[4957]: I0218 14:55:22.993353 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:23 crc kubenswrapper[4957]: I0218 14:55:23.211521 4957 generic.go:334] "Generic (PLEG): container finished" podID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerID="24ce149ff122d658c6184fdc73ffe70fa4bf2e46b11c6b4dc48688856ef318f2" exitCode=0 Feb 18 14:55:23 crc kubenswrapper[4957]: I0218 14:55:23.212102 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5676bc86fc-b7j6f" event={"ID":"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c","Type":"ContainerDied","Data":"24ce149ff122d658c6184fdc73ffe70fa4bf2e46b11c6b4dc48688856ef318f2"} Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.075598 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5676bc86fc-b7j6f" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": dial tcp 10.217.0.195:9696: connect: connection refused" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.141970 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sg7fz" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.186227 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.256654 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sgzz" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.266009 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-sg7fz" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.337259 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sgzz" event={"ID":"9205f99b-873e-4f53-9d89-85b77ca7adc1","Type":"ContainerDied","Data":"fb351eee6b70178887cf2ccc4c294f64582321e03dbc7b1ceac5fcb9d389e6ff"} Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.337302 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb351eee6b70178887cf2ccc4c294f64582321e03dbc7b1ceac5fcb9d389e6ff" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.337315 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sg7fz" event={"ID":"a3b12676-eb60-406c-a019-461370859d2a","Type":"ContainerDied","Data":"561ff9d61bfb9c2c4c3f08e7f092c0a7a0146db64f47a54957ccb5821fad5a8b"} Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.337326 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561ff9d61bfb9c2c4c3f08e7f092c0a7a0146db64f47a54957ccb5821fad5a8b" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338206 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgpqx\" (UniqueName: \"kubernetes.io/projected/9205f99b-873e-4f53-9d89-85b77ca7adc1-kube-api-access-vgpqx\") pod \"9205f99b-873e-4f53-9d89-85b77ca7adc1\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338305 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5ng7\" (UniqueName: \"kubernetes.io/projected/a3b12676-eb60-406c-a019-461370859d2a-kube-api-access-f5ng7\") pod \"a3b12676-eb60-406c-a019-461370859d2a\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338331 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-config-data\") pod \"a3b12676-eb60-406c-a019-461370859d2a\" (UID: \"a3b12676-eb60-406c-a019-461370859d2a\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338358 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-scripts\") pod \"9205f99b-873e-4f53-9d89-85b77ca7adc1\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338501 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-config-data\") pod \"9205f99b-873e-4f53-9d89-85b77ca7adc1\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338550 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-db-sync-config-data\") pod \"9205f99b-873e-4f53-9d89-85b77ca7adc1\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338631 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-combined-ca-bundle\") pod \"a3b12676-eb60-406c-a019-461370859d2a\" (UID: 
\"a3b12676-eb60-406c-a019-461370859d2a\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338661 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9205f99b-873e-4f53-9d89-85b77ca7adc1-etc-machine-id\") pod \"9205f99b-873e-4f53-9d89-85b77ca7adc1\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.338681 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-combined-ca-bundle\") pod \"9205f99b-873e-4f53-9d89-85b77ca7adc1\" (UID: \"9205f99b-873e-4f53-9d89-85b77ca7adc1\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.349596 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9205f99b-873e-4f53-9d89-85b77ca7adc1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9205f99b-873e-4f53-9d89-85b77ca7adc1" (UID: "9205f99b-873e-4f53-9d89-85b77ca7adc1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.350975 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b12676-eb60-406c-a019-461370859d2a-kube-api-access-f5ng7" (OuterVolumeSpecName: "kube-api-access-f5ng7") pod "a3b12676-eb60-406c-a019-461370859d2a" (UID: "a3b12676-eb60-406c-a019-461370859d2a"). InnerVolumeSpecName "kube-api-access-f5ng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.359539 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.381440 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-scripts" (OuterVolumeSpecName: "scripts") pod "9205f99b-873e-4f53-9d89-85b77ca7adc1" (UID: "9205f99b-873e-4f53-9d89-85b77ca7adc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.382958 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9205f99b-873e-4f53-9d89-85b77ca7adc1-kube-api-access-vgpqx" (OuterVolumeSpecName: "kube-api-access-vgpqx") pod "9205f99b-873e-4f53-9d89-85b77ca7adc1" (UID: "9205f99b-873e-4f53-9d89-85b77ca7adc1"). InnerVolumeSpecName "kube-api-access-vgpqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.393663 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9205f99b-873e-4f53-9d89-85b77ca7adc1" (UID: "9205f99b-873e-4f53-9d89-85b77ca7adc1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.442685 4957 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9205f99b-873e-4f53-9d89-85b77ca7adc1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.442707 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgpqx\" (UniqueName: \"kubernetes.io/projected/9205f99b-873e-4f53-9d89-85b77ca7adc1-kube-api-access-vgpqx\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.442720 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5ng7\" (UniqueName: \"kubernetes.io/projected/a3b12676-eb60-406c-a019-461370859d2a-kube-api-access-f5ng7\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.442728 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.442737 4957 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.443646 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3b12676-eb60-406c-a019-461370859d2a" (UID: "a3b12676-eb60-406c-a019-461370859d2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.464812 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9205f99b-873e-4f53-9d89-85b77ca7adc1" (UID: "9205f99b-873e-4f53-9d89-85b77ca7adc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.489148 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-config-data" (OuterVolumeSpecName: "config-data") pod "9205f99b-873e-4f53-9d89-85b77ca7adc1" (UID: "9205f99b-873e-4f53-9d89-85b77ca7adc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521126 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:24 crc kubenswrapper[4957]: E0218 14:55:24.521657 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521670 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker" Feb 18 14:55:24 crc kubenswrapper[4957]: E0218 14:55:24.521689 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9205f99b-873e-4f53-9d89-85b77ca7adc1" containerName="cinder-db-sync" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521696 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9205f99b-873e-4f53-9d89-85b77ca7adc1" containerName="cinder-db-sync" Feb 18 14:55:24 crc kubenswrapper[4957]: E0218 14:55:24.521708 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b12676-eb60-406c-a019-461370859d2a" containerName="heat-db-sync" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521716 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b12676-eb60-406c-a019-461370859d2a" containerName="heat-db-sync" Feb 18 14:55:24 crc kubenswrapper[4957]: E0218 14:55:24.521748 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker-log" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521754 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker-log" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521953 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521976 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b12676-eb60-406c-a019-461370859d2a" containerName="heat-db-sync" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521988 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" containerName="barbican-worker-log" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.521998 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9205f99b-873e-4f53-9d89-85b77ca7adc1" containerName="cinder-db-sync" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.524482 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.528772 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.545289 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-combined-ca-bundle\") pod \"1a219f4d-fd2e-4398-95c0-8005624a8c31\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.545395 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data\") pod \"1a219f4d-fd2e-4398-95c0-8005624a8c31\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.545430 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhbtq\" (UniqueName: \"kubernetes.io/projected/1a219f4d-fd2e-4398-95c0-8005624a8c31-kube-api-access-rhbtq\") pod \"1a219f4d-fd2e-4398-95c0-8005624a8c31\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.545458 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data-custom\") pod \"1a219f4d-fd2e-4398-95c0-8005624a8c31\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.545594 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a219f4d-fd2e-4398-95c0-8005624a8c31-logs\") pod \"1a219f4d-fd2e-4398-95c0-8005624a8c31\" (UID: \"1a219f4d-fd2e-4398-95c0-8005624a8c31\") " Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.546070 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.546081 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.546091 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9205f99b-873e-4f53-9d89-85b77ca7adc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.546724 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a219f4d-fd2e-4398-95c0-8005624a8c31-logs" (OuterVolumeSpecName: "logs") pod "1a219f4d-fd2e-4398-95c0-8005624a8c31" (UID: "1a219f4d-fd2e-4398-95c0-8005624a8c31"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.580526 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.580589 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a219f4d-fd2e-4398-95c0-8005624a8c31" (UID: "1a219f4d-fd2e-4398-95c0-8005624a8c31"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.580654 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a219f4d-fd2e-4398-95c0-8005624a8c31-kube-api-access-rhbtq" (OuterVolumeSpecName: "kube-api-access-rhbtq") pod "1a219f4d-fd2e-4398-95c0-8005624a8c31" (UID: "1a219f4d-fd2e-4398-95c0-8005624a8c31"). InnerVolumeSpecName "kube-api-access-rhbtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.583459 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.623718 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-config-data" (OuterVolumeSpecName: "config-data") pod "a3b12676-eb60-406c-a019-461370859d2a" (UID: "a3b12676-eb60-406c-a019-461370859d2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.647109 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hdpgd"] Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648603 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daeedf1e-1be4-4db9-8354-cdd548f43256-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648649 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648718 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648867 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-scripts\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648895 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmt6n\" (UniqueName: \"kubernetes.io/projected/daeedf1e-1be4-4db9-8354-cdd548f43256-kube-api-access-bmt6n\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648980 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhbtq\" (UniqueName: \"kubernetes.io/projected/1a219f4d-fd2e-4398-95c0-8005624a8c31-kube-api-access-rhbtq\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.648997 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.649008 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b12676-eb60-406c-a019-461370859d2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.649021 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a219f4d-fd2e-4398-95c0-8005624a8c31-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.669790 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a219f4d-fd2e-4398-95c0-8005624a8c31" (UID: "1a219f4d-fd2e-4398-95c0-8005624a8c31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.689367 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xmqsm"] Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.691620 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.731664 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xmqsm"] Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.751658 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daeedf1e-1be4-4db9-8354-cdd548f43256-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.751737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.751823 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.751939 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.752054 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-scripts\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.752089 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmt6n\" (UniqueName: \"kubernetes.io/projected/daeedf1e-1be4-4db9-8354-cdd548f43256-kube-api-access-bmt6n\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.752230 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.752567 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data" (OuterVolumeSpecName: "config-data") pod "1a219f4d-fd2e-4398-95c0-8005624a8c31" (UID: "1a219f4d-fd2e-4398-95c0-8005624a8c31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.752587 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daeedf1e-1be4-4db9-8354-cdd548f43256-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.762558 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.774567 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.775905 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-scripts\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.779071 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.799099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmt6n\" (UniqueName: \"kubernetes.io/projected/daeedf1e-1be4-4db9-8354-cdd548f43256-kube-api-access-bmt6n\") pod \"cinder-scheduler-0\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.805618 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjglk" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:24 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:24 crc kubenswrapper[4957]: > Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.853784 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.853918 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-config\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.853960 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.853996 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwv5g\" (UniqueName: \"kubernetes.io/projected/88ba3d00-f51e-4168-809e-ee46dad21b45-kube-api-access-pwv5g\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.854097 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.854137 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.854206 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a219f4d-fd2e-4398-95c0-8005624a8c31-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.869558 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.885166 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fbd58c64f-dmc49"] Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.957815 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.957877 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwv5g\" (UniqueName: \"kubernetes.io/projected/88ba3d00-f51e-4168-809e-ee46dad21b45-kube-api-access-pwv5g\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.957972 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.958002 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.958042 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.958121 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-config\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.958952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-config\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.959097 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.959498 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.959804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.961320 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:24 crc kubenswrapper[4957]: I0218 14:55:24.994957 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.000267 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.004188 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.044490 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwv5g\" (UniqueName: \"kubernetes.io/projected/88ba3d00-f51e-4168-809e-ee46dad21b45-kube-api-access-pwv5g\") pod \"dnsmasq-dns-5c9776ccc5-xmqsm\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.071960 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.168892 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.168991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-scripts\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.169051 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data-custom\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.169083 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/608c576f-d3a4-42d5-9ba2-92d8f2560905-etc-machine-id\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.169123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nzfpf\" (UniqueName: \"kubernetes.io/projected/608c576f-d3a4-42d5-9ba2-92d8f2560905-kube-api-access-nzfpf\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.169153 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.169178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608c576f-d3a4-42d5-9ba2-92d8f2560905-logs\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.291634 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzfpf\" (UniqueName: \"kubernetes.io/projected/608c576f-d3a4-42d5-9ba2-92d8f2560905-kube-api-access-nzfpf\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.291744 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.291801 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608c576f-d3a4-42d5-9ba2-92d8f2560905-logs\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.291905 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.292071 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-scripts\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.292210 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data-custom\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.292316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/608c576f-d3a4-42d5-9ba2-92d8f2560905-etc-machine-id\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.292516 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/608c576f-d3a4-42d5-9ba2-92d8f2560905-etc-machine-id\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.308151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-scripts\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.310339 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608c576f-d3a4-42d5-9ba2-92d8f2560905-logs\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.313683 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data-custom\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.313912 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.341855 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.398403 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.400576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzfpf\" (UniqueName: \"kubernetes.io/projected/608c576f-d3a4-42d5-9ba2-92d8f2560905-kube-api-access-nzfpf\") pod \"cinder-api-0\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") " pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.429194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64456c6c55-glfbl" event={"ID":"1a219f4d-fd2e-4398-95c0-8005624a8c31","Type":"ContainerDied","Data":"5214813722e15c1cf7444584e68265d9adea728aba9d81fb043b186663c8fd2f"} Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.429260 4957 scope.go:117] "RemoveContainer" containerID="5e2a4a205862e096a59181199936c9aa007f13bf22a30eaaa7f77b0413c252e2" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.429601 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-64456c6c55-glfbl" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.457724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fbd58c64f-dmc49" event={"ID":"c7e3267f-8a47-48ec-94ee-40aed5e39cff","Type":"ContainerStarted","Data":"ab626967075b9cc4da6404a1f38d14fd32a50faebbf65d30ff28c38475f77f1f"} Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.488510 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" podUID="da55f937-4545-48b6-93df-ff81f7215472" containerName="dnsmasq-dns" containerID="cri-o://5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca" gracePeriod=10 Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.488802 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-central-agent" containerID="cri-o://cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413" gracePeriod=30 Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.488858 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerStarted","Data":"415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36"} Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.489252 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.489290 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="proxy-httpd" containerID="cri-o://415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36" gracePeriod=30 Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.489352 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="sg-core" containerID="cri-o://127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3" gracePeriod=30 Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.489389 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-notification-agent" containerID="cri-o://2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1" gracePeriod=30 Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.561545 4957 scope.go:117] "RemoveContainer" containerID="82b4bdb4b219020361ef275eee0678d0796cf9e67c54fc19a15343be03f3308f" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.604672 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-64456c6c55-glfbl"] Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.620094 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.637604 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-64456c6c55-glfbl"] Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.690147 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.257629799 podStartE2EDuration="1m3.690123497s" podCreationTimestamp="2026-02-18 14:54:22 +0000 UTC" firstStartedPulling="2026-02-18 14:54:24.531792147 +0000 UTC m=+1371.052656881" lastFinishedPulling="2026-02-18 14:55:23.964285835 +0000 UTC m=+1430.485150579" observedRunningTime="2026-02-18 14:55:25.54055406 +0000 UTC m=+1432.061418814" watchObservedRunningTime="2026-02-18 14:55:25.690123497 +0000 UTC m=+1432.210988241" Feb 18 14:55:25 crc kubenswrapper[4957]: I0218 14:55:25.921764 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.106170 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xmqsm"] Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.264268 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a219f4d-fd2e-4398-95c0-8005624a8c31" path="/var/lib/kubelet/pods/1a219f4d-fd2e-4398-95c0-8005624a8c31/volumes" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.518668 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:26 crc kubenswrapper[4957]: W0218 14:55:26.552960 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608c576f_d3a4_42d5_9ba2_92d8f2560905.slice/crio-6f434f8a8917b44d1be215d2125286c8182f6394c055a4fa76f3c2cd3be5f70e WatchSource:0}: Error finding container 6f434f8a8917b44d1be215d2125286c8182f6394c055a4fa76f3c2cd3be5f70e: Status 404 returned error can't find the container with id 6f434f8a8917b44d1be215d2125286c8182f6394c055a4fa76f3c2cd3be5f70e Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.560244 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.562767 4957 generic.go:334] "Generic (PLEG): container finished" podID="da55f937-4545-48b6-93df-ff81f7215472" containerID="5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca" exitCode=0 Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.562868 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" event={"ID":"da55f937-4545-48b6-93df-ff81f7215472","Type":"ContainerDied","Data":"5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.562894 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" event={"ID":"da55f937-4545-48b6-93df-ff81f7215472","Type":"ContainerDied","Data":"10586da96e226f4439f4a77ad0369d76634aecf85e0385c8f69b339cd8d0c2a2"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.562915 4957 scope.go:117] "RemoveContainer" containerID="5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.563998 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-hdpgd" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.616034 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fbd58c64f-dmc49" event={"ID":"c7e3267f-8a47-48ec-94ee-40aed5e39cff","Type":"ContainerStarted","Data":"65fccf8ed27130dbc27f3032cb6a831e69fbc34d168fcf90ef944eeeb7c6d1e6"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.616149 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fbd58c64f-dmc49" event={"ID":"c7e3267f-8a47-48ec-94ee-40aed5e39cff","Type":"ContainerStarted","Data":"e8ff8db041c5c61190348347f8b945b0e6c2f5a8e1ed5e27501ade54f5316893"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.617564 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.629293 4957 generic.go:334] "Generic (PLEG): container finished" podID="86855234-3589-4780-a8a5-75d8a4350c27" containerID="415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36" exitCode=0 Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.629321 4957 generic.go:334] "Generic (PLEG): container finished" podID="86855234-3589-4780-a8a5-75d8a4350c27" containerID="127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3" exitCode=2 Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.629328 4957 generic.go:334] "Generic (PLEG): container finished" podID="86855234-3589-4780-a8a5-75d8a4350c27" containerID="cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413" exitCode=0 Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.629627 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerDied","Data":"415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.629658 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerDied","Data":"127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.629767 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerDied","Data":"cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.658741 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"daeedf1e-1be4-4db9-8354-cdd548f43256","Type":"ContainerStarted","Data":"434a3e0733af3bd792c76bad275429b67554658aee0b8643c994e25c1ca814af"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.659220 4957 scope.go:117] "RemoveContainer" containerID="214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.661866 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" event={"ID":"88ba3d00-f51e-4168-809e-ee46dad21b45","Type":"ContainerStarted","Data":"c211fd28de5f634eeb862576f336b7514081517505ba949ee6d66f8066058b9e"} Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.667149 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7fbd58c64f-dmc49" podStartSLOduration=4.667123675 
podStartE2EDuration="4.667123675s" podCreationTimestamp="2026-02-18 14:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:26.645471398 +0000 UTC m=+1433.166336142" watchObservedRunningTime="2026-02-18 14:55:26.667123675 +0000 UTC m=+1433.187988419" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.690736 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-nb\") pod \"da55f937-4545-48b6-93df-ff81f7215472\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.691165 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-svc\") pod \"da55f937-4545-48b6-93df-ff81f7215472\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.691202 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-swift-storage-0\") pod \"da55f937-4545-48b6-93df-ff81f7215472\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.691239 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-config\") pod \"da55f937-4545-48b6-93df-ff81f7215472\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.691278 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn27w\" (UniqueName: \"kubernetes.io/projected/da55f937-4545-48b6-93df-ff81f7215472-kube-api-access-vn27w\") pod \"da55f937-4545-48b6-93df-ff81f7215472\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.691305 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-sb\") pod \"da55f937-4545-48b6-93df-ff81f7215472\" (UID: \"da55f937-4545-48b6-93df-ff81f7215472\") " Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.707554 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da55f937-4545-48b6-93df-ff81f7215472-kube-api-access-vn27w" (OuterVolumeSpecName: "kube-api-access-vn27w") pod "da55f937-4545-48b6-93df-ff81f7215472" (UID: "da55f937-4545-48b6-93df-ff81f7215472"). InnerVolumeSpecName "kube-api-access-vn27w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.772733 4957 scope.go:117] "RemoveContainer" containerID="5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca" Feb 18 14:55:26 crc kubenswrapper[4957]: E0218 14:55:26.774252 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca\": container with ID starting with 5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca not found: ID does not exist" containerID="5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.774285 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca"} err="failed to get container status \"5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca\": rpc error: code = NotFound desc = could not find container \"5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca\": container with ID starting with 5527615e90699da12f7872652ec647defc705f06dbb01417721be02aa64750ca not found: ID does not exist" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.774308 4957 scope.go:117] "RemoveContainer" containerID="214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064" Feb 18 14:55:26 crc kubenswrapper[4957]: E0218 14:55:26.777878 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064\": container with ID starting with 214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064 not found: ID does not exist" containerID="214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.777933 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064"} err="failed to get container status \"214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064\": rpc error: code = NotFound desc = could not find container \"214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064\": container with ID starting with 214e23bc7de7ea33e2ee6c9c168cedfeb5fa391ba7c14bd126f665597740b064 not found: ID does not exist" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.795673 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn27w\" (UniqueName: \"kubernetes.io/projected/da55f937-4545-48b6-93df-ff81f7215472-kube-api-access-vn27w\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.828078 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da55f937-4545-48b6-93df-ff81f7215472" (UID: "da55f937-4545-48b6-93df-ff81f7215472"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.856702 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da55f937-4545-48b6-93df-ff81f7215472" (UID: "da55f937-4545-48b6-93df-ff81f7215472"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.862454 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-config" (OuterVolumeSpecName: "config") pod "da55f937-4545-48b6-93df-ff81f7215472" (UID: "da55f937-4545-48b6-93df-ff81f7215472"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.873133 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da55f937-4545-48b6-93df-ff81f7215472" (UID: "da55f937-4545-48b6-93df-ff81f7215472"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.879045 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da55f937-4545-48b6-93df-ff81f7215472" (UID: "da55f937-4545-48b6-93df-ff81f7215472"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.894109 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.898086 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.898131 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.898143 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.898158 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:26 crc kubenswrapper[4957]: I0218 14:55:26.898172 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da55f937-4545-48b6-93df-ff81f7215472-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.096564 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c7999fbc4-ttfwg" Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.184215 4957 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-768df8c8bb-2s2nt"] Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.184472 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-768df8c8bb-2s2nt" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api-log" containerID="cri-o://6a0a47e9c8fe026a739f11a88aebc05bf04a56fe90d8d5154453e2c28c0f6e83" gracePeriod=30 Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.184923 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-768df8c8bb-2s2nt" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api" containerID="cri-o://8b62ef971adb34a7f8e7baf58d7c93d620e071e61c1cd5b644cd9b9e0f3bf043" gracePeriod=30 Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.202641 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-768df8c8bb-2s2nt" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": EOF" Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.341725 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hdpgd"] Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.363425 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-hdpgd"] Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.682787 4957 generic.go:334] "Generic (PLEG): container finished" podID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerID="e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48" exitCode=0 Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.683001 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" event={"ID":"88ba3d00-f51e-4168-809e-ee46dad21b45","Type":"ContainerDied","Data":"e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48"} Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.698789 4957 generic.go:334] "Generic (PLEG): container finished" podID="401e08eb-2650-46f4-9906-7780d2655b31" containerID="6a0a47e9c8fe026a739f11a88aebc05bf04a56fe90d8d5154453e2c28c0f6e83" exitCode=143 Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.698891 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-768df8c8bb-2s2nt" event={"ID":"401e08eb-2650-46f4-9906-7780d2655b31","Type":"ContainerDied","Data":"6a0a47e9c8fe026a739f11a88aebc05bf04a56fe90d8d5154453e2c28c0f6e83"} Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.707744 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"608c576f-d3a4-42d5-9ba2-92d8f2560905","Type":"ContainerStarted","Data":"6f434f8a8917b44d1be215d2125286c8182f6394c055a4fa76f3c2cd3be5f70e"} Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.743499 4957 generic.go:334] "Generic (PLEG): container finished" podID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerID="724ae5090c9c953a769bc5cdbb49510d3e37ce00a400b59638e3888375d0c426" exitCode=0 Feb 18 14:55:27 crc kubenswrapper[4957]: I0218 14:55:27.743579 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5676bc86fc-b7j6f" event={"ID":"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c","Type":"ContainerDied","Data":"724ae5090c9c953a769bc5cdbb49510d3e37ce00a400b59638e3888375d0c426"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.026284 4957 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.230123 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da55f937-4545-48b6-93df-ff81f7215472" path="/var/lib/kubelet/pods/da55f937-4545-48b6-93df-ff81f7215472/volumes" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.353676 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503011 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-ovndb-tls-certs\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503089 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-combined-ca-bundle\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503141 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-public-tls-certs\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503216 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-config\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503251 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-internal-tls-certs\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503299 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmhjp\" (UniqueName: \"kubernetes.io/projected/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-kube-api-access-rmhjp\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.503574 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-httpd-config\") pod \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\" (UID: \"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.514244 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-kube-api-access-rmhjp" (OuterVolumeSpecName: "kube-api-access-rmhjp") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "kube-api-access-rmhjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.529991 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.609039 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmhjp\" (UniqueName: \"kubernetes.io/projected/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-kube-api-access-rmhjp\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.609337 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.613360 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-config" (OuterVolumeSpecName: "config") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.669626 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.684581 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.712709 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.716615 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.716641 4957 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.716653 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.717030 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.742223 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" (UID: "75590e4a-6bc1-44d5-9a9e-5756f4a7de8c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.776977 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"daeedf1e-1be4-4db9-8354-cdd548f43256","Type":"ContainerStarted","Data":"45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.779580 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" event={"ID":"88ba3d00-f51e-4168-809e-ee46dad21b45","Type":"ContainerStarted","Data":"468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.779920 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.790232 4957 generic.go:334] "Generic (PLEG): container finished" podID="86855234-3589-4780-a8a5-75d8a4350c27" containerID="2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1" exitCode=0 Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.790304 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.790354 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerDied","Data":"2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.790386 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86855234-3589-4780-a8a5-75d8a4350c27","Type":"ContainerDied","Data":"18695ef7925f6c7b7e5f3919811981fbfe03a50f1df3dbf464a2c86a64916049"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.790403 4957 scope.go:117] "RemoveContainer" containerID="415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.796092 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"608c576f-d3a4-42d5-9ba2-92d8f2560905","Type":"ContainerStarted","Data":"b812932d705db67c58db95eb5a774a24fbbd83516c912b6ef898add9ab4bfcb2"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.812537 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5676bc86fc-b7j6f" event={"ID":"75590e4a-6bc1-44d5-9a9e-5756f4a7de8c","Type":"ContainerDied","Data":"bd090441f097ae3e6bab956274dc876ec71208091f2a8481c088b365f4c7f37a"} Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.812681 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5676bc86fc-b7j6f" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.815698 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" podStartSLOduration=4.815681469 podStartE2EDuration="4.815681469s" podCreationTimestamp="2026-02-18 14:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:28.802485027 +0000 UTC m=+1435.323349791" watchObservedRunningTime="2026-02-18 14:55:28.815681469 +0000 UTC m=+1435.336546213" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821099 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-combined-ca-bundle\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821132 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-config-data\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821180 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-log-httpd\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821226 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-run-httpd\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: 
\"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821279 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-sg-core-conf-yaml\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821446 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-scripts\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.821491 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwzgs\" (UniqueName: \"kubernetes.io/projected/86855234-3589-4780-a8a5-75d8a4350c27-kube-api-access-qwzgs\") pod \"86855234-3589-4780-a8a5-75d8a4350c27\" (UID: \"86855234-3589-4780-a8a5-75d8a4350c27\") " Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.822020 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.822042 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.822260 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.822850 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.828010 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86855234-3589-4780-a8a5-75d8a4350c27-kube-api-access-qwzgs" (OuterVolumeSpecName: "kube-api-access-qwzgs") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "kube-api-access-qwzgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.834493 4957 scope.go:117] "RemoveContainer" containerID="127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.839854 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-scripts" (OuterVolumeSpecName: "scripts") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.876727 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.883149 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5676bc86fc-b7j6f"] Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.895458 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5676bc86fc-b7j6f"] Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.924926 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.925145 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86855234-3589-4780-a8a5-75d8a4350c27-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.925245 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.925321 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.925395 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwzgs\" (UniqueName: \"kubernetes.io/projected/86855234-3589-4780-a8a5-75d8a4350c27-kube-api-access-qwzgs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.943676 4957 scope.go:117] "RemoveContainer" containerID="2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1" Feb 18 14:55:28 crc kubenswrapper[4957]: I0218 14:55:28.978393 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.025464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-config-data" (OuterVolumeSpecName: "config-data") pod "86855234-3589-4780-a8a5-75d8a4350c27" (UID: "86855234-3589-4780-a8a5-75d8a4350c27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.027025 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.027044 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86855234-3589-4780-a8a5-75d8a4350c27-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.168856 4957 scope.go:117] "RemoveContainer" containerID="cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.170012 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.183290 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194003 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194600 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da55f937-4545-48b6-93df-ff81f7215472" containerName="dnsmasq-dns" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194616 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="da55f937-4545-48b6-93df-ff81f7215472" containerName="dnsmasq-dns" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194631 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="proxy-httpd" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194638 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="proxy-httpd" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194650 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-httpd" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194657 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-httpd" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194666 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da55f937-4545-48b6-93df-ff81f7215472" containerName="init" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194672 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="da55f937-4545-48b6-93df-ff81f7215472" containerName="init" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194681 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-central-agent" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194687 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-central-agent" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194701 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="sg-core" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194706 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="sg-core" Feb 18 
14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194730 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-api" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194735 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-api" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.194750 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-notification-agent" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194756 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-notification-agent" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194945 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-central-agent" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194961 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="proxy-httpd" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194972 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-api" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194980 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="sg-core" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.194998 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="da55f937-4545-48b6-93df-ff81f7215472" containerName="dnsmasq-dns" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.195008 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="86855234-3589-4780-a8a5-75d8a4350c27" containerName="ceilometer-notification-agent" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.195016 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" containerName="neutron-httpd" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.196971 4957 util.go:30] "No sandbox for pod can be found. 
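The cpu_manager.go / memory_manager.go burst above appears to fire as the re-created ceilometer-0 (new UID) is admitted: RemoveStaleState sweeps per-container CPU and memory assignments left behind by pod UIDs that no longer exist. A toy version of that sweep, with the state layout invented for illustration:

package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	assignments := map[key]string{ // container -> cpuset, as in state_mem
		{"86855234-3589-4780-a8a5-75d8a4350c27", "sg-core"}:     "0-3",
		{"da55f937-4545-48b6-93df-ff81f7215472", "dnsmasq-dns"}: "0-3",
	}
	active := map[string]bool{"46f8dbf3-ac91-469d-8d11-dc33a51b2d24": true} // the new ceilometer-0
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}
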
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.221999 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.222126 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.238861 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.277817 4957 scope.go:117] "RemoveContainer" containerID="415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.285114 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36\": container with ID starting with 415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36 not found: ID does not exist" containerID="415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.285157 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36"} err="failed to get container status \"415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36\": rpc error: code = NotFound desc = could not find container \"415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36\": container with ID starting with 415264cfbb527cf59ce502746b4faeadddb802f1c2025ec9bc6a676ab5243c36 not found: ID does not exist" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.285182 4957 scope.go:117] "RemoveContainer" containerID="127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.285732 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3\": container with ID starting with 127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3 not found: ID does not exist" containerID="127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.285757 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3"} err="failed to get container status \"127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3\": rpc error: code = NotFound desc = could not find container \"127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3\": container with ID starting with 127b825a86c5ff0148cf0ebee0981a0d07998697b29a1f9952795c3b07b5a3e3 not found: ID does not exist" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.285772 4957 scope.go:117] "RemoveContainer" containerID="2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.285991 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1\": container with ID starting with 2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1 not found: ID 
does not exist" containerID="2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.286012 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1"} err="failed to get container status \"2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1\": rpc error: code = NotFound desc = could not find container \"2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1\": container with ID starting with 2c9f74ecd757966d257850b11178c028a5a828bafe90ee101af4c0ee154220d1 not found: ID does not exist" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.286024 4957 scope.go:117] "RemoveContainer" containerID="cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413" Feb 18 14:55:29 crc kubenswrapper[4957]: E0218 14:55:29.286204 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413\": container with ID starting with cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413 not found: ID does not exist" containerID="cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.286224 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413"} err="failed to get container status \"cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413\": rpc error: code = NotFound desc = could not find container \"cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413\": container with ID starting with cf84290ebb3b0143fb40a7b4437c22eeefd853c8e208b14410e7abe5454b2413 not found: ID does not exist" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.286236 4957 scope.go:117] "RemoveContainer" containerID="24ce149ff122d658c6184fdc73ffe70fa4bf2e46b11c6b4dc48688856ef318f2" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.332753 4957 scope.go:117] "RemoveContainer" containerID="724ae5090c9c953a769bc5cdbb49510d3e37ce00a400b59638e3888375d0c426" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.334839 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.334888 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.334933 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-scripts\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.334955 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-config-data\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.334972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-run-httpd\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.335033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-log-httpd\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.335125 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwhq\" (UniqueName: \"kubernetes.io/projected/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-kube-api-access-ptwhq\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.438911 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwhq\" (UniqueName: \"kubernetes.io/projected/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-kube-api-access-ptwhq\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439055 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439074 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439100 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-scripts\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439117 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-config-data\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-run-httpd\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439171 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-log-httpd\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.439681 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-log-httpd\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.441811 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-run-httpd\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.444315 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-scripts\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.445602 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.457304 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.458256 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-config-data\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.464181 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwhq\" (UniqueName: \"kubernetes.io/projected/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-kube-api-access-ptwhq\") pod \"ceilometer-0\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") " pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.550975 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.830398 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"608c576f-d3a4-42d5-9ba2-92d8f2560905","Type":"ContainerStarted","Data":"e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a"} Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.831338 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.830633 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api-log" containerID="cri-o://b812932d705db67c58db95eb5a774a24fbbd83516c912b6ef898add9ab4bfcb2" gracePeriod=30 Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.831372 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api" containerID="cri-o://e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a" gracePeriod=30 Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.839315 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"daeedf1e-1be4-4db9-8354-cdd548f43256","Type":"ContainerStarted","Data":"701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182"} Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.864827 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.8648031419999995 podStartE2EDuration="5.864803142s" podCreationTimestamp="2026-02-18 14:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:29.853817424 +0000 UTC m=+1436.374682168" watchObservedRunningTime="2026-02-18 14:55:29.864803142 +0000 UTC m=+1436.385667896" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.870276 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 14:55:29 crc kubenswrapper[4957]: I0218 14:55:29.882068 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.557953251 podStartE2EDuration="5.882047491s" podCreationTimestamp="2026-02-18 14:55:24 +0000 UTC" firstStartedPulling="2026-02-18 14:55:25.981802086 +0000 UTC m=+1432.502666830" lastFinishedPulling="2026-02-18 14:55:27.305896326 +0000 UTC m=+1433.826761070" observedRunningTime="2026-02-18 14:55:29.876920963 +0000 UTC m=+1436.397785727" watchObservedRunningTime="2026-02-18 14:55:29.882047491 +0000 UTC m=+1436.402912235" Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 14:55:30.117398 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 14:55:30.227286 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75590e4a-6bc1-44d5-9a9e-5756f4a7de8c" path="/var/lib/kubelet/pods/75590e4a-6bc1-44d5-9a9e-5756f4a7de8c/volumes" Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 14:55:30.228489 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86855234-3589-4780-a8a5-75d8a4350c27" path="/var/lib/kubelet/pods/86855234-3589-4780-a8a5-75d8a4350c27/volumes" Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 
14:55:30.851007 4957 generic.go:334] "Generic (PLEG): container finished" podID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerID="b812932d705db67c58db95eb5a774a24fbbd83516c912b6ef898add9ab4bfcb2" exitCode=143 Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 14:55:30.851065 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"608c576f-d3a4-42d5-9ba2-92d8f2560905","Type":"ContainerDied","Data":"b812932d705db67c58db95eb5a774a24fbbd83516c912b6ef898add9ab4bfcb2"} Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 14:55:30.853008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerStarted","Data":"c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0"} Feb 18 14:55:30 crc kubenswrapper[4957]: I0218 14:55:30.853057 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerStarted","Data":"4a2368872b9e645eb7aefb7af03316eecd979d6e665e5a74fbafc8b88a094978"} Feb 18 14:55:31 crc kubenswrapper[4957]: I0218 14:55:31.673142 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-768df8c8bb-2s2nt" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:39058->10.217.0.203:9311: read: connection reset by peer" Feb 18 14:55:31 crc kubenswrapper[4957]: I0218 14:55:31.673149 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-768df8c8bb-2s2nt" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.203:9311/healthcheck\": read tcp 10.217.0.2:39074->10.217.0.203:9311: read: connection reset by peer" Feb 18 14:55:31 crc kubenswrapper[4957]: I0218 14:55:31.872196 4957 generic.go:334] "Generic (PLEG): container finished" podID="401e08eb-2650-46f4-9906-7780d2655b31" containerID="8b62ef971adb34a7f8e7baf58d7c93d620e071e61c1cd5b644cd9b9e0f3bf043" exitCode=0 Feb 18 14:55:31 crc kubenswrapper[4957]: I0218 14:55:31.872273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-768df8c8bb-2s2nt" event={"ID":"401e08eb-2650-46f4-9906-7780d2655b31","Type":"ContainerDied","Data":"8b62ef971adb34a7f8e7baf58d7c93d620e071e61c1cd5b644cd9b9e0f3bf043"} Feb 18 14:55:31 crc kubenswrapper[4957]: I0218 14:55:31.875817 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerStarted","Data":"e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5"} Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.242368 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.343783 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data\") pod \"401e08eb-2650-46f4-9906-7780d2655b31\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.343957 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401e08eb-2650-46f4-9906-7780d2655b31-logs\") pod \"401e08eb-2650-46f4-9906-7780d2655b31\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.344049 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data-custom\") pod \"401e08eb-2650-46f4-9906-7780d2655b31\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.344148 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvgh\" (UniqueName: \"kubernetes.io/projected/401e08eb-2650-46f4-9906-7780d2655b31-kube-api-access-xzvgh\") pod \"401e08eb-2650-46f4-9906-7780d2655b31\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.344239 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-combined-ca-bundle\") pod \"401e08eb-2650-46f4-9906-7780d2655b31\" (UID: \"401e08eb-2650-46f4-9906-7780d2655b31\") " Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.345094 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/401e08eb-2650-46f4-9906-7780d2655b31-logs" (OuterVolumeSpecName: "logs") pod "401e08eb-2650-46f4-9906-7780d2655b31" (UID: "401e08eb-2650-46f4-9906-7780d2655b31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.349815 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "401e08eb-2650-46f4-9906-7780d2655b31" (UID: "401e08eb-2650-46f4-9906-7780d2655b31"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.351016 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401e08eb-2650-46f4-9906-7780d2655b31-kube-api-access-xzvgh" (OuterVolumeSpecName: "kube-api-access-xzvgh") pod "401e08eb-2650-46f4-9906-7780d2655b31" (UID: "401e08eb-2650-46f4-9906-7780d2655b31"). InnerVolumeSpecName "kube-api-access-xzvgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.383660 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "401e08eb-2650-46f4-9906-7780d2655b31" (UID: "401e08eb-2650-46f4-9906-7780d2655b31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.435577 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data" (OuterVolumeSpecName: "config-data") pod "401e08eb-2650-46f4-9906-7780d2655b31" (UID: "401e08eb-2650-46f4-9906-7780d2655b31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.447109 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.447490 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvgh\" (UniqueName: \"kubernetes.io/projected/401e08eb-2650-46f4-9906-7780d2655b31-kube-api-access-xzvgh\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.447506 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.447521 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/401e08eb-2650-46f4-9906-7780d2655b31-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.447530 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/401e08eb-2650-46f4-9906-7780d2655b31-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.892350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-768df8c8bb-2s2nt" event={"ID":"401e08eb-2650-46f4-9906-7780d2655b31","Type":"ContainerDied","Data":"f65b50ccbf6ec25e6dbf4738e74eec0fa9253b77b98b7cda35931fde9e21340b"} Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.892451 4957 scope.go:117] "RemoveContainer" containerID="8b62ef971adb34a7f8e7baf58d7c93d620e071e61c1cd5b644cd9b9e0f3bf043" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.892699 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-768df8c8bb-2s2nt" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.895582 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerStarted","Data":"b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6"} Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.919595 4957 scope.go:117] "RemoveContainer" containerID="6a0a47e9c8fe026a739f11a88aebc05bf04a56fe90d8d5154453e2c28c0f6e83" Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.941400 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-768df8c8bb-2s2nt"] Feb 18 14:55:32 crc kubenswrapper[4957]: I0218 14:55:32.974360 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-768df8c8bb-2s2nt"] Feb 18 14:55:34 crc kubenswrapper[4957]: I0218 14:55:34.237069 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401e08eb-2650-46f4-9906-7780d2655b31" path="/var/lib/kubelet/pods/401e08eb-2650-46f4-9906-7780d2655b31/volumes" Feb 18 14:55:34 crc kubenswrapper[4957]: I0218 14:55:34.779357 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjglk" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:34 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:34 crc kubenswrapper[4957]: > Feb 18 14:55:34 crc kubenswrapper[4957]: I0218 14:55:34.919121 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerStarted","Data":"1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f"} Feb 18 14:55:34 crc kubenswrapper[4957]: I0218 14:55:34.919510 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:55:34 crc kubenswrapper[4957]: I0218 14:55:34.951971 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.137398862 podStartE2EDuration="5.951949857s" podCreationTimestamp="2026-02-18 14:55:29 +0000 UTC" firstStartedPulling="2026-02-18 14:55:30.119583193 +0000 UTC m=+1436.640447937" lastFinishedPulling="2026-02-18 14:55:33.934134188 +0000 UTC m=+1440.454998932" observedRunningTime="2026-02-18 14:55:34.940127515 +0000 UTC m=+1441.460992259" watchObservedRunningTime="2026-02-18 14:55:34.951949857 +0000 UTC m=+1441.472814611" Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.112910 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.209399 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.344854 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.428074 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vx74r"] Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.428304 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerName="dnsmasq-dns" 
containerID="cri-o://709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada" gracePeriod=10 Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.934806 4957 generic.go:334] "Generic (PLEG): container finished" podID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerID="709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada" exitCode=0 Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.935699 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="cinder-scheduler" containerID="cri-o://45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98" gracePeriod=30 Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.934846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" event={"ID":"d2f29bea-5fda-4080-a98b-b869c41e3dab","Type":"ContainerDied","Data":"709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada"} Feb 18 14:55:35 crc kubenswrapper[4957]: I0218 14:55:35.936725 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="probe" containerID="cri-o://701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182" gracePeriod=30 Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.067726 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.174702 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-swift-storage-0\") pod \"d2f29bea-5fda-4080-a98b-b869c41e3dab\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.174860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjzz9\" (UniqueName: \"kubernetes.io/projected/d2f29bea-5fda-4080-a98b-b869c41e3dab-kube-api-access-bjzz9\") pod \"d2f29bea-5fda-4080-a98b-b869c41e3dab\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.174935 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-svc\") pod \"d2f29bea-5fda-4080-a98b-b869c41e3dab\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.174979 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-config\") pod \"d2f29bea-5fda-4080-a98b-b869c41e3dab\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.175076 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-nb\") pod \"d2f29bea-5fda-4080-a98b-b869c41e3dab\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.175150 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-sb\") pod \"d2f29bea-5fda-4080-a98b-b869c41e3dab\" (UID: \"d2f29bea-5fda-4080-a98b-b869c41e3dab\") " Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.204640 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f29bea-5fda-4080-a98b-b869c41e3dab-kube-api-access-bjzz9" (OuterVolumeSpecName: "kube-api-access-bjzz9") pod "d2f29bea-5fda-4080-a98b-b869c41e3dab" (UID: "d2f29bea-5fda-4080-a98b-b869c41e3dab"). InnerVolumeSpecName "kube-api-access-bjzz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.259242 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2f29bea-5fda-4080-a98b-b869c41e3dab" (UID: "d2f29bea-5fda-4080-a98b-b869c41e3dab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.264913 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2f29bea-5fda-4080-a98b-b869c41e3dab" (UID: "d2f29bea-5fda-4080-a98b-b869c41e3dab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.278549 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.278598 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjzz9\" (UniqueName: \"kubernetes.io/projected/d2f29bea-5fda-4080-a98b-b869c41e3dab-kube-api-access-bjzz9\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.278616 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.280956 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2f29bea-5fda-4080-a98b-b869c41e3dab" (UID: "d2f29bea-5fda-4080-a98b-b869c41e3dab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.293990 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2f29bea-5fda-4080-a98b-b869c41e3dab" (UID: "d2f29bea-5fda-4080-a98b-b869c41e3dab"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.303892 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-config" (OuterVolumeSpecName: "config") pod "d2f29bea-5fda-4080-a98b-b869c41e3dab" (UID: "d2f29bea-5fda-4080-a98b-b869c41e3dab"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.380726 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.380759 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.380770 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f29bea-5fda-4080-a98b-b869c41e3dab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.947775 4957 generic.go:334] "Generic (PLEG): container finished" podID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerID="701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182" exitCode=0 Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.948080 4957 generic.go:334] "Generic (PLEG): container finished" podID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerID="45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98" exitCode=0 Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.947868 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"daeedf1e-1be4-4db9-8354-cdd548f43256","Type":"ContainerDied","Data":"701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182"} Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.948146 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"daeedf1e-1be4-4db9-8354-cdd548f43256","Type":"ContainerDied","Data":"45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98"} Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.951403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" event={"ID":"d2f29bea-5fda-4080-a98b-b869c41e3dab","Type":"ContainerDied","Data":"a41d5314140c7607f5fa3643cacebb6378b52a22d2153671c7e3b58500454031"} Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.951479 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vx74r" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.951479 4957 scope.go:117] "RemoveContainer" containerID="709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.982705 4957 scope.go:117] "RemoveContainer" containerID="133506c284a0e1a4ac9c63847af0827d35d039576c2b6156ffbfb558923a8f53" Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.986320 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vx74r"] Feb 18 14:55:36 crc kubenswrapper[4957]: I0218 14:55:36.997497 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vx74r"] Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.148022 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.305225 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daeedf1e-1be4-4db9-8354-cdd548f43256-etc-machine-id\") pod \"daeedf1e-1be4-4db9-8354-cdd548f43256\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.305624 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-scripts\") pod \"daeedf1e-1be4-4db9-8354-cdd548f43256\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.305385 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/daeedf1e-1be4-4db9-8354-cdd548f43256-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "daeedf1e-1be4-4db9-8354-cdd548f43256" (UID: "daeedf1e-1be4-4db9-8354-cdd548f43256"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.305883 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data\") pod \"daeedf1e-1be4-4db9-8354-cdd548f43256\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.305962 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data-custom\") pod \"daeedf1e-1be4-4db9-8354-cdd548f43256\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.306045 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmt6n\" (UniqueName: \"kubernetes.io/projected/daeedf1e-1be4-4db9-8354-cdd548f43256-kube-api-access-bmt6n\") pod \"daeedf1e-1be4-4db9-8354-cdd548f43256\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.306133 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-combined-ca-bundle\") pod \"daeedf1e-1be4-4db9-8354-cdd548f43256\" (UID: \"daeedf1e-1be4-4db9-8354-cdd548f43256\") " Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.306767 4957 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daeedf1e-1be4-4db9-8354-cdd548f43256-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.312623 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daeedf1e-1be4-4db9-8354-cdd548f43256-kube-api-access-bmt6n" (OuterVolumeSpecName: "kube-api-access-bmt6n") pod "daeedf1e-1be4-4db9-8354-cdd548f43256" (UID: "daeedf1e-1be4-4db9-8354-cdd548f43256"). InnerVolumeSpecName "kube-api-access-bmt6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.312854 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "daeedf1e-1be4-4db9-8354-cdd548f43256" (UID: "daeedf1e-1be4-4db9-8354-cdd548f43256"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.313081 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-scripts" (OuterVolumeSpecName: "scripts") pod "daeedf1e-1be4-4db9-8354-cdd548f43256" (UID: "daeedf1e-1be4-4db9-8354-cdd548f43256"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.379639 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daeedf1e-1be4-4db9-8354-cdd548f43256" (UID: "daeedf1e-1be4-4db9-8354-cdd548f43256"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.409524 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.409566 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.409579 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.409591 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmt6n\" (UniqueName: \"kubernetes.io/projected/daeedf1e-1be4-4db9-8354-cdd548f43256-kube-api-access-bmt6n\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.446536 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data" (OuterVolumeSpecName: "config-data") pod "daeedf1e-1be4-4db9-8354-cdd548f43256" (UID: "daeedf1e-1be4-4db9-8354-cdd548f43256"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.511809 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daeedf1e-1be4-4db9-8354-cdd548f43256-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.936463 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.980374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"daeedf1e-1be4-4db9-8354-cdd548f43256","Type":"ContainerDied","Data":"434a3e0733af3bd792c76bad275429b67554658aee0b8643c994e25c1ca814af"} Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.980441 4957 scope.go:117] "RemoveContainer" containerID="701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182" Feb 18 14:55:37 crc kubenswrapper[4957]: I0218 14:55:37.980519 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.027972 4957 scope.go:117] "RemoveContainer" containerID="45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.041886 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.059503 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.079660 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:38 crc kubenswrapper[4957]: E0218 14:55:38.080316 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerName="init" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.080404 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerName="init" Feb 18 14:55:38 crc kubenswrapper[4957]: E0218 14:55:38.080509 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="cinder-scheduler" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.080602 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="cinder-scheduler" Feb 18 14:55:38 crc kubenswrapper[4957]: E0218 14:55:38.080682 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerName="dnsmasq-dns" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.080734 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerName="dnsmasq-dns" Feb 18 14:55:38 crc kubenswrapper[4957]: E0218 14:55:38.080802 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api-log" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.080856 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api-log" Feb 18 14:55:38 crc kubenswrapper[4957]: E0218 14:55:38.080920 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="probe" Feb 18 14:55:38 crc 
kubenswrapper[4957]: I0218 14:55:38.080970 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="probe" Feb 18 14:55:38 crc kubenswrapper[4957]: E0218 14:55:38.081029 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.081109 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.081428 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.081513 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="401e08eb-2650-46f4-9906-7780d2655b31" containerName="barbican-api-log" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.081571 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="cinder-scheduler" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.081636 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" containerName="probe" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.081702 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" containerName="dnsmasq-dns" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.082965 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.085912 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.089284 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.228712 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f29bea-5fda-4080-a98b-b869c41e3dab" path="/var/lib/kubelet/pods/d2f29bea-5fda-4080-a98b-b869c41e3dab/volumes" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.229555 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daeedf1e-1be4-4db9-8354-cdd548f43256" path="/var/lib/kubelet/pods/daeedf1e-1be4-4db9-8354-cdd548f43256/volumes" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.236945 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzk7\" (UniqueName: \"kubernetes.io/projected/e7fbc309-e5ee-4222-8409-6d68468ae015-kube-api-access-knzk7\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.237122 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.237304 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.237584 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.237731 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.237788 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7fbc309-e5ee-4222-8409-6d68468ae015-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.340251 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.340389 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.340599 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.340700 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.340747 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7fbc309-e5ee-4222-8409-6d68468ae015-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.340916 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzk7\" (UniqueName: \"kubernetes.io/projected/e7fbc309-e5ee-4222-8409-6d68468ae015-kube-api-access-knzk7\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " 
pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.341308 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7fbc309-e5ee-4222-8409-6d68468ae015-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.349058 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.349142 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.352283 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.355726 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fbc309-e5ee-4222-8409-6d68468ae015-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.375988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzk7\" (UniqueName: \"kubernetes.io/projected/e7fbc309-e5ee-4222-8409-6d68468ae015-kube-api-access-knzk7\") pod \"cinder-scheduler-0\" (UID: \"e7fbc309-e5ee-4222-8409-6d68468ae015\") " pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.406781 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.869629 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:55:38 crc kubenswrapper[4957]: I0218 14:55:38.996686 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7fbc309-e5ee-4222-8409-6d68468ae015","Type":"ContainerStarted","Data":"c2d87d393b31981abd05be24687d097f1f5e0cc29611cd4135a64f8bbf571a80"} Feb 18 14:55:40 crc kubenswrapper[4957]: I0218 14:55:40.008243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7fbc309-e5ee-4222-8409-6d68468ae015","Type":"ContainerStarted","Data":"5c9456a365e1b038bd55aba3cde326bd5a3d8e14610956b5b39ad709f2db3ccb"} Feb 18 14:55:40 crc kubenswrapper[4957]: I0218 14:55:40.008648 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7fbc309-e5ee-4222-8409-6d68468ae015","Type":"ContainerStarted","Data":"c56a353ed7b7e1b4b51e206a5ceb87b775ac853438042079b219de59fd7eab4c"} Feb 18 14:55:40 crc kubenswrapper[4957]: I0218 14:55:40.026145 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.026119785 podStartE2EDuration="2.026119785s" podCreationTimestamp="2026-02-18 14:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:40.023056977 +0000 UTC m=+1446.543921721" watchObservedRunningTime="2026-02-18 14:55:40.026119785 +0000 UTC m=+1446.546984529" Feb 18 14:55:40 crc kubenswrapper[4957]: I0218 14:55:40.475494 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-59cd79686b-x6zk5" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.126512 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.129027 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.131037 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.131304 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.131906 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xfq8w" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.137730 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.183224 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2f421b-f6d0-4db4-9162-f863e45ca417-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.183394 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprht\" (UniqueName: \"kubernetes.io/projected/aa2f421b-f6d0-4db4-9162-f863e45ca417-kube-api-access-dprht\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.183550 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2f421b-f6d0-4db4-9162-f863e45ca417-openstack-config\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.183612 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2f421b-f6d0-4db4-9162-f863e45ca417-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.285803 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2f421b-f6d0-4db4-9162-f863e45ca417-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.286217 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2f421b-f6d0-4db4-9162-f863e45ca417-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.286469 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprht\" (UniqueName: \"kubernetes.io/projected/aa2f421b-f6d0-4db4-9162-f863e45ca417-kube-api-access-dprht\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.286657 4957 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2f421b-f6d0-4db4-9162-f863e45ca417-openstack-config\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.288142 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2f421b-f6d0-4db4-9162-f863e45ca417-openstack-config\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.293887 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2f421b-f6d0-4db4-9162-f863e45ca417-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.299546 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2f421b-f6d0-4db4-9162-f863e45ca417-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.305633 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprht\" (UniqueName: \"kubernetes.io/projected/aa2f421b-f6d0-4db4-9162-f863e45ca417-kube-api-access-dprht\") pod \"openstackclient\" (UID: \"aa2f421b-f6d0-4db4-9162-f863e45ca417\") " pod="openstack/openstackclient" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.407404 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 14:55:43 crc kubenswrapper[4957]: I0218 14:55:43.451160 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:55:44 crc kubenswrapper[4957]: I0218 14:55:44.020322 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:55:44 crc kubenswrapper[4957]: I0218 14:55:44.048272 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aa2f421b-f6d0-4db4-9162-f863e45ca417","Type":"ContainerStarted","Data":"7a478bc5f70868083e0bea42af3682548d15f6de9febb5565ee3e8768fe831ac"} Feb 18 14:55:44 crc kubenswrapper[4957]: I0218 14:55:44.782083 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjglk" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:44 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:44 crc kubenswrapper[4957]: > Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.368566 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7c68fdd987-chglv"] Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.371195 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.381156 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c68fdd987-chglv"] Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.396020 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.396275 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.396458 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.483298 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47005221-336b-424d-8c90-fc0c320cd135-run-httpd\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.483932 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-combined-ca-bundle\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.484288 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47005221-336b-424d-8c90-fc0c320cd135-etc-swift\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.484608 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-config-data\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.484878 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tch9d\" (UniqueName: \"kubernetes.io/projected/47005221-336b-424d-8c90-fc0c320cd135-kube-api-access-tch9d\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.485031 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-public-tls-certs\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.485213 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-internal-tls-certs\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " 
pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.485377 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47005221-336b-424d-8c90-fc0c320cd135-log-httpd\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-internal-tls-certs\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588373 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47005221-336b-424d-8c90-fc0c320cd135-log-httpd\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588555 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47005221-336b-424d-8c90-fc0c320cd135-run-httpd\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588573 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-combined-ca-bundle\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588637 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47005221-336b-424d-8c90-fc0c320cd135-etc-swift\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588697 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-config-data\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588731 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tch9d\" (UniqueName: \"kubernetes.io/projected/47005221-336b-424d-8c90-fc0c320cd135-kube-api-access-tch9d\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.588768 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-public-tls-certs\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 
14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.590073 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47005221-336b-424d-8c90-fc0c320cd135-log-httpd\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.592746 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47005221-336b-424d-8c90-fc0c320cd135-run-httpd\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.597435 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-combined-ca-bundle\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.599383 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-internal-tls-certs\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.599760 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-config-data\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.600799 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47005221-336b-424d-8c90-fc0c320cd135-etc-swift\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.601514 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47005221-336b-424d-8c90-fc0c320cd135-public-tls-certs\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.609831 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tch9d\" (UniqueName: \"kubernetes.io/projected/47005221-336b-424d-8c90-fc0c320cd135-kube-api-access-tch9d\") pod \"swift-proxy-7c68fdd987-chglv\" (UID: \"47005221-336b-424d-8c90-fc0c320cd135\") " pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:47 crc kubenswrapper[4957]: I0218 14:55:47.715754 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:48 crc kubenswrapper[4957]: I0218 14:55:48.349526 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c68fdd987-chglv"] Feb 18 14:55:48 crc kubenswrapper[4957]: I0218 14:55:48.653904 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.127010 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c68fdd987-chglv" event={"ID":"47005221-336b-424d-8c90-fc0c320cd135","Type":"ContainerStarted","Data":"47e7cccaf9839caf9ec43cb187bd74363ad32d04f3c26eea20e563dd826b3566"} Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.127351 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.127363 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c68fdd987-chglv" event={"ID":"47005221-336b-424d-8c90-fc0c320cd135","Type":"ContainerStarted","Data":"3627028ee929477b8815494c3e87bb2169b9215d61fa42fc0fc7e9ded2a4c3e2"} Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.127372 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c68fdd987-chglv" event={"ID":"47005221-336b-424d-8c90-fc0c320cd135","Type":"ContainerStarted","Data":"d8d9136c5179c4bba49ff798edc4eaefc2b08f677d01dd1fa31c5a8b5d1ad3ff"} Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.127734 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c68fdd987-chglv" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.182830 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7c68fdd987-chglv" podStartSLOduration=2.182791623 podStartE2EDuration="2.182791623s" podCreationTimestamp="2026-02-18 14:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:55:49.162503576 +0000 UTC m=+1455.683368320" watchObservedRunningTime="2026-02-18 14:55:49.182791623 +0000 UTC m=+1455.703656377" Feb 18 14:55:49 crc kubenswrapper[4957]: E0218 14:55:49.438881 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-conmon-709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-conmon-45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-a41d5314140c7607f5fa3643cacebb6378b52a22d2153671c7e3b58500454031\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-434a3e0733af3bd792c76bad275429b67554658aee0b8643c994e25c1ca814af\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-conmon-701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:55:49 crc kubenswrapper[4957]: E0218 14:55:49.441438 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-434a3e0733af3bd792c76bad275429b67554658aee0b8643c994e25c1ca814af\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-conmon-709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-conmon-701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-a41d5314140c7607f5fa3643cacebb6378b52a22d2153671c7e3b58500454031\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-conmon-45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:55:49 crc kubenswrapper[4957]: E0218 14:55:49.598011 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-a41d5314140c7607f5fa3643cacebb6378b52a22d2153671c7e3b58500454031\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-conmon-709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-conmon-701965444d0099501e693f7f72f5b19d510cea850e55c8ab211ade3d611e6182.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-conmon-45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f29bea_5fda_4080_a98b_b869c41e3dab.slice/crio-709638ed8b589de857df474f40c1c4c5e31bf6d8042a69c0bd2637c7e585eada.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-434a3e0733af3bd792c76bad275429b67554658aee0b8643c994e25c1ca814af\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaeedf1e_1be4_4db9_8354_cdd548f43256.slice/crio-45aa19e92e1482006289d50687206d6ea0df03ad6bf3d1f783ae23c29cd5cf98.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.717484 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-65df87cd54-bjsc4"] Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.740530 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.763131 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-sqlqh" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.766200 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.766384 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.824800 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-65df87cd54-bjsc4"] Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.899764 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rst4k"] Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.916898 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7fb5f67558-76mdf"] Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.919163 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.925965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gdc\" (UniqueName: \"kubernetes.io/projected/bde3dea8-c483-423d-8f4a-74575433fd2f-kube-api-access-k9gdc\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.926119 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data-custom\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.926170 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-combined-ca-bundle\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.926189 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.926553 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.926930 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 18 14:55:49 crc kubenswrapper[4957]: I0218 14:55:49.982525 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rst4k"] Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.023020 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7fb5f67558-76mdf"] Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.027737 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.027974 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028090 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data-custom\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028208 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gdc\" (UniqueName: \"kubernetes.io/projected/bde3dea8-c483-423d-8f4a-74575433fd2f-kube-api-access-k9gdc\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028292 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028406 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95qh\" (UniqueName: \"kubernetes.io/projected/1ef0249f-b7a3-4183-9f46-0553a63c26ac-kube-api-access-j95qh\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028527 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data-custom\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028626 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-combined-ca-bundle\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028704 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmhqz\" (UniqueName: \"kubernetes.io/projected/9b0c6928-d889-421a-81e2-5e9dd8e1e986-kube-api-access-kmhqz\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028839 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.028956 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.029068 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-combined-ca-bundle\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.029215 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.029330 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-config\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.050840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-combined-ca-bundle\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.052513 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.053576 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data-custom\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.078139 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gdc\" (UniqueName: \"kubernetes.io/projected/bde3dea8-c483-423d-8f4a-74575433fd2f-kube-api-access-k9gdc\") pod \"heat-engine-65df87cd54-bjsc4\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.078411 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5dfdd78696-x7mzw"] Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.079992 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.088708 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.112952 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.128140 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5dfdd78696-x7mzw"] Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148560 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmhqz\" (UniqueName: \"kubernetes.io/projected/9b0c6928-d889-421a-81e2-5e9dd8e1e986-kube-api-access-kmhqz\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148642 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148698 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-combined-ca-bundle\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148823 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148860 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-config\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148970 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.148995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.149024 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data-custom\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.149120 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.149211 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95qh\" (UniqueName: \"kubernetes.io/projected/1ef0249f-b7a3-4183-9f46-0553a63c26ac-kube-api-access-j95qh\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.150617 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.172066 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data-custom\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.172669 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.173567 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-config\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.174565 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.174720 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.175129 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.175862 4957 generic.go:334] "Generic (PLEG): container finished" podID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerID="22589993cf04dd84a5ced76dc2e4b3c636efd99cf1c4056002c72b0814d46ebb" exitCode=137 Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.176111 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" event={"ID":"c7cb1bd6-6151-4d00-834c-dbe330c6506b","Type":"ContainerDied","Data":"22589993cf04dd84a5ced76dc2e4b3c636efd99cf1c4056002c72b0814d46ebb"} Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.189830 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95qh\" (UniqueName: \"kubernetes.io/projected/1ef0249f-b7a3-4183-9f46-0553a63c26ac-kube-api-access-j95qh\") pod \"dnsmasq-dns-7756b9d78c-rst4k\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.190250 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-combined-ca-bundle\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.192157 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmhqz\" (UniqueName: \"kubernetes.io/projected/9b0c6928-d889-421a-81e2-5e9dd8e1e986-kube-api-access-kmhqz\") pod \"heat-cfnapi-7fb5f67558-76mdf\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.251606 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data-custom\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.251799 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwq4g\" (UniqueName: \"kubernetes.io/projected/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-kube-api-access-hwq4g\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " 
pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.251976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.252015 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-combined-ca-bundle\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.267413 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.298436 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.355478 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwq4g\" (UniqueName: \"kubernetes.io/projected/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-kube-api-access-hwq4g\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.355699 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.355733 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-combined-ca-bundle\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.355787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data-custom\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.363969 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data-custom\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.365435 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 
14:55:50.369480 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-combined-ca-bundle\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.386793 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwq4g\" (UniqueName: \"kubernetes.io/projected/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-kube-api-access-hwq4g\") pod \"heat-api-5dfdd78696-x7mzw\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:50 crc kubenswrapper[4957]: I0218 14:55:50.628580 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.016573 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.017839 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="sg-core" containerID="cri-o://b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6" gracePeriod=30 Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.017974 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="proxy-httpd" containerID="cri-o://1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f" gracePeriod=30 Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.018203 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-notification-agent" containerID="cri-o://e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5" gracePeriod=30 Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.017803 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-central-agent" containerID="cri-o://c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0" gracePeriod=30 Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.037184 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": EOF" Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.204973 4957 generic.go:334] "Generic (PLEG): container finished" podID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerID="b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6" exitCode=2 Feb 18 14:55:51 crc kubenswrapper[4957]: I0218 14:55:51.205058 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerDied","Data":"b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6"} Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.025296 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7fbd58c64f-dmc49" Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.146840 4957 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dc87d6466-mkw7v"] Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.148993 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dc87d6466-mkw7v" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-api" containerID="cri-o://3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9" gracePeriod=30 Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.149653 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dc87d6466-mkw7v" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-httpd" containerID="cri-o://3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280" gracePeriod=30 Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.329524 4957 generic.go:334] "Generic (PLEG): container finished" podID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerID="1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f" exitCode=0 Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.329553 4957 generic.go:334] "Generic (PLEG): container finished" podID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerID="c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0" exitCode=0 Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.329571 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerDied","Data":"1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f"} Feb 18 14:55:53 crc kubenswrapper[4957]: I0218 14:55:53.329594 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerDied","Data":"c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0"} Feb 18 14:55:54 crc kubenswrapper[4957]: I0218 14:55:54.347357 4957 generic.go:334] "Generic (PLEG): container finished" podID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerID="3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280" exitCode=0 Feb 18 14:55:54 crc kubenswrapper[4957]: I0218 14:55:54.347404 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc87d6466-mkw7v" event={"ID":"49162e61-07d2-4596-a0c5-fd8f90890e35","Type":"ContainerDied","Data":"3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280"} Feb 18 14:55:54 crc kubenswrapper[4957]: I0218 14:55:54.356203 4957 generic.go:334] "Generic (PLEG): container finished" podID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerID="e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5" exitCode=0 Feb 18 14:55:54 crc kubenswrapper[4957]: I0218 14:55:54.356234 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerDied","Data":"e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5"} Feb 18 14:55:54 crc kubenswrapper[4957]: I0218 14:55:54.818811 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjglk" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:54 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:54 crc kubenswrapper[4957]: > Feb 18 14:55:55 crc kubenswrapper[4957]: I0218 14:55:55.663660 4957 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/cinder-api-0" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.208:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.122346 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-687db6759-27j8z"] Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.124249 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.137066 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-687db6759-27j8z"] Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.166540 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d845d58dd-m6z94"] Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.170325 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.300943 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-combined-ca-bundle\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.301302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.301509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qtg\" (UniqueName: \"kubernetes.io/projected/befc3a14-3df1-45fd-9d4c-91033abb4d61-kube-api-access-p4qtg\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.301667 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-combined-ca-bundle\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.301825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data-custom\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.301950 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxszb\" (UniqueName: \"kubernetes.io/projected/37f34265-b814-4eb7-b633-b1516352e951-kube-api-access-rxszb\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") 
" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.302064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data-custom\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.302181 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.321318 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d845d58dd-m6z94"] Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.324880 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6744cfff74-cgs9b"] Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.328298 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.335185 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6744cfff74-cgs9b"] Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.404536 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.405203 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5g4z\" (UniqueName: \"kubernetes.io/projected/3e973679-e589-4827-8e24-d2fda83ab2e2-kube-api-access-p5g4z\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.405405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qtg\" (UniqueName: \"kubernetes.io/projected/befc3a14-3df1-45fd-9d4c-91033abb4d61-kube-api-access-p4qtg\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.405560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-combined-ca-bundle\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.405739 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-combined-ca-bundle\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:55:56 crc 
kubenswrapper[4957]: I0218 14:55:56.405951 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data-custom\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.406103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data-custom\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.406270 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxszb\" (UniqueName: \"kubernetes.io/projected/37f34265-b814-4eb7-b633-b1516352e951-kube-api-access-rxszb\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.406394 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data-custom\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.406584 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.406722 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-combined-ca-bundle\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.407535 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.413262 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.413783 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data-custom\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.415586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data-custom\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.416756 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-combined-ca-bundle\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.417002 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-combined-ca-bundle\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.417598 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.426439 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxszb\" (UniqueName: \"kubernetes.io/projected/37f34265-b814-4eb7-b633-b1516352e951-kube-api-access-rxszb\") pod \"heat-cfnapi-7d845d58dd-m6z94\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.445833 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qtg\" (UniqueName: \"kubernetes.io/projected/befc3a14-3df1-45fd-9d4c-91033abb4d61-kube-api-access-p4qtg\") pod \"heat-engine-687db6759-27j8z\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.469038 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-687db6759-27j8z"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.510470 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data-custom\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.510652 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.510707 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5g4z\" (UniqueName: \"kubernetes.io/projected/3e973679-e589-4827-8e24-d2fda83ab2e2-kube-api-access-p5g4z\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.510754 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-combined-ca-bundle\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.514561 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data-custom\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.516554 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.528230 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-combined-ca-bundle\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.536234 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5g4z\" (UniqueName: \"kubernetes.io/projected/3e973679-e589-4827-8e24-d2fda83ab2e2-kube-api-access-p5g4z\") pod \"heat-api-6744cfff74-cgs9b\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.564581 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d845d58dd-m6z94"
Feb 18 14:55:56 crc kubenswrapper[4957]: I0218 14:55:56.655940 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6744cfff74-cgs9b"
Feb 18 14:55:57 crc kubenswrapper[4957]: I0218 14:55:57.725954 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c68fdd987-chglv"
Feb 18 14:55:57 crc kubenswrapper[4957]: I0218 14:55:57.726984 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c68fdd987-chglv"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.099932 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dfdd78696-x7mzw"]
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.138448 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7fb5f67558-76mdf"]
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.166378 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-655997456c-8vtx7"]
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.167969 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.172359 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.173624 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.192913 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77b45dc78-6kx7v"]
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.194643 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.201682 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.204438 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.238148 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-655997456c-8vtx7"]
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.246445 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77b45dc78-6kx7v"]
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.255938 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.256053 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-public-tls-certs\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.256072 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-kube-api-access-xtgdv\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.256175 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-internal-tls-certs\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.256223 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data-custom\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.256278 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-combined-ca-bundle\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-public-tls-certs\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359147 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-kube-api-access-xtgdv\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359217 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-combined-ca-bundle\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359288 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-internal-tls-certs\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359334 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data-custom\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359360 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-public-tls-certs\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359538 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-combined-ca-bundle\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359622 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data-custom\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359650 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359694 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359747 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-internal-tls-certs\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.359786 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nz47\" (UniqueName: \"kubernetes.io/projected/f1e0aae7-8a7f-4147-9abb-23f7c7691351-kube-api-access-5nz47\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.364763 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-internal-tls-certs\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.366349 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-public-tls-certs\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.367064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data-custom\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.378937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-kube-api-access-xtgdv\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.384294 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.387950 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-combined-ca-bundle\") pod \"heat-api-655997456c-8vtx7\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.415135 4957 generic.go:334] "Generic (PLEG): container finished" podID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerID="3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9" exitCode=0
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.416171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc87d6466-mkw7v" event={"ID":"49162e61-07d2-4596-a0c5-fd8f90890e35","Type":"ContainerDied","Data":"3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9"}
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.462002 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-combined-ca-bundle\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.462354 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-public-tls-certs\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.462617 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data-custom\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.462748 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.462907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-internal-tls-certs\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.463159 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nz47\" (UniqueName: \"kubernetes.io/projected/f1e0aae7-8a7f-4147-9abb-23f7c7691351-kube-api-access-5nz47\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.467397 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data-custom\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.468654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-combined-ca-bundle\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.468865 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-public-tls-certs\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.468869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.470714 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-internal-tls-certs\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.484617 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nz47\" (UniqueName: \"kubernetes.io/projected/f1e0aae7-8a7f-4147-9abb-23f7c7691351-kube-api-access-5nz47\") pod \"heat-cfnapi-77b45dc78-6kx7v\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.499912 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-655997456c-8vtx7"
Feb 18 14:55:58 crc kubenswrapper[4957]: I0218 14:55:58.522804 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b45dc78-6kx7v"
Feb 18 14:55:59 crc kubenswrapper[4957]: I0218 14:55:59.553128 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused"
Feb 18 14:55:59 crc kubenswrapper[4957]: E0218 14:55:59.586861 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified"
Feb 18 14:55:59 crc kubenswrapper[4957]: E0218 14:55:59.587043 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68ch55dh69h8fh6bh688h5b5h557h96h7h6dh549hdfhcch645h58bh68fh4h5dfh65h597hfdh84h5c9h564h5c8h667h69h5b4h5c6h57ch67fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dprht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(aa2f421b-f6d0-4db4-9162-f863e45ca417): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 14:55:59 crc kubenswrapper[4957]: E0218 14:55:59.588823 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="aa2f421b-f6d0-4db4-9162-f863e45ca417"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.458795 4957 generic.go:334] "Generic (PLEG): container finished" podID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerID="e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a" exitCode=137
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.459623 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"608c576f-d3a4-42d5-9ba2-92d8f2560905","Type":"ContainerDied","Data":"e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a"}
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.468630 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc87d6466-mkw7v" event={"ID":"49162e61-07d2-4596-a0c5-fd8f90890e35","Type":"ContainerDied","Data":"8d0e58a24d3d8891c6c66af5e6e652482bf62506026627c8ad746d9ee143cbc0"}
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.468673 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0e58a24d3d8891c6c66af5e6e652482bf62506026627c8ad746d9ee143cbc0"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.474120 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.476977 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46f8dbf3-ac91-469d-8d11-dc33a51b2d24","Type":"ContainerDied","Data":"4a2368872b9e645eb7aefb7af03316eecd979d6e665e5a74fbafc8b88a094978"}
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.477050 4957 scope.go:117] "RemoveContainer" containerID="1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f"
Feb 18 14:56:00 crc kubenswrapper[4957]: E0218 14:56:00.511772 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="aa2f421b-f6d0-4db4-9162-f863e45ca417"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.531012 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-sg-core-conf-yaml\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.531096 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-log-httpd\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.531257 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-combined-ca-bundle\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.531457 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-config-data\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.531487 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-scripts\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.532575 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwhq\" (UniqueName: \"kubernetes.io/projected/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-kube-api-access-ptwhq\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.532648 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-run-httpd\") pod \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\" (UID: \"46f8dbf3-ac91-469d-8d11-dc33a51b2d24\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.532925 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.534439 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.535522 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.535544 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.541573 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-scripts" (OuterVolumeSpecName: "scripts") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.562603 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.571728 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-kube-api-access-ptwhq" (OuterVolumeSpecName: "kube-api-access-ptwhq") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "kube-api-access-ptwhq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.602485 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.638016 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-combined-ca-bundle\") pod \"49162e61-07d2-4596-a0c5-fd8f90890e35\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.638090 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-ovndb-tls-certs\") pod \"49162e61-07d2-4596-a0c5-fd8f90890e35\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.638185 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-httpd-config\") pod \"49162e61-07d2-4596-a0c5-fd8f90890e35\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.638329 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd776\" (UniqueName: \"kubernetes.io/projected/49162e61-07d2-4596-a0c5-fd8f90890e35-kube-api-access-xd776\") pod \"49162e61-07d2-4596-a0c5-fd8f90890e35\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.639057 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-config\") pod \"49162e61-07d2-4596-a0c5-fd8f90890e35\" (UID: \"49162e61-07d2-4596-a0c5-fd8f90890e35\") "
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.640403 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwhq\" (UniqueName: \"kubernetes.io/projected/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-kube-api-access-ptwhq\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.640445 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.640457 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.631544 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.208:8776/healthcheck\": dial tcp 10.217.0.208:8776: connect: connection refused"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.669214 4957 scope.go:117] "RemoveContainer" containerID="b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6"
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.684454 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49162e61-07d2-4596-a0c5-fd8f90890e35-kube-api-access-xd776" (OuterVolumeSpecName: "kube-api-access-xd776") pod "49162e61-07d2-4596-a0c5-fd8f90890e35" (UID: "49162e61-07d2-4596-a0c5-fd8f90890e35"). InnerVolumeSpecName "kube-api-access-xd776". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.685348 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "49162e61-07d2-4596-a0c5-fd8f90890e35" (UID: "49162e61-07d2-4596-a0c5-fd8f90890e35"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.754780 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd776\" (UniqueName: \"kubernetes.io/projected/49162e61-07d2-4596-a0c5-fd8f90890e35-kube-api-access-xd776\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.754823 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.774150 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49162e61-07d2-4596-a0c5-fd8f90890e35" (UID: "49162e61-07d2-4596-a0c5-fd8f90890e35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.819186 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.841258 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "49162e61-07d2-4596-a0c5-fd8f90890e35" (UID: "49162e61-07d2-4596-a0c5-fd8f90890e35"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.847575 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-config-data" (OuterVolumeSpecName: "config-data") pod "46f8dbf3-ac91-469d-8d11-dc33a51b2d24" (UID: "46f8dbf3-ac91-469d-8d11-dc33a51b2d24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.857257 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.857293 4957 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.857306 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.857315 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f8dbf3-ac91-469d-8d11-dc33a51b2d24-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.858014 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-config" (OuterVolumeSpecName: "config") pod "49162e61-07d2-4596-a0c5-fd8f90890e35" (UID: "49162e61-07d2-4596-a0c5-fd8f90890e35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:00 crc kubenswrapper[4957]: I0218 14:56:00.961307 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/49162e61-07d2-4596-a0c5-fd8f90890e35-config\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.047338 4957 scope.go:117] "RemoveContainer" containerID="e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.049230 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54"
Feb 18 14:56:01 crc kubenswrapper[4957]: E0218 14:56:01.093903 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608c576f_d3a4_42d5_9ba2_92d8f2560905.slice/crio-e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-conmon-b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608c576f_d3a4_42d5_9ba2_92d8f2560905.slice/crio-conmon-e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49162e61_07d2_4596_a0c5_fd8f90890e35.slice/crio-3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49162e61_07d2_4596_a0c5_fd8f90890e35.slice/crio-conmon-3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-conmon-1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-conmon-e500c38939b9836cce29d60cfb8ff32930e9ff8168b6b90b2517fd9c93e736e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49162e61_07d2_4596_a0c5_fd8f90890e35.slice/crio-conmon-3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-conmon-c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-1cd9c45d8dea28db111c530f257714c6d4939e26327efc23c8e0b4500ed7842f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8dbf3_ac91_469d_8d11_dc33a51b2d24.slice/crio-b1efbee1c7ecf0941b838236bca1523e961852805a7f1534148e58bb2cc6cab6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49162e61_07d2_4596_a0c5_fd8f90890e35.slice/crio-3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.095893 4957 scope.go:117] "RemoveContainer" containerID="c4b5f42cb84638506325eb8d728939aa76599a069f7eb04754148bc7b3478de0"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.165298 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data\") pod \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.165549 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-combined-ca-bundle\") pod \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.165616 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7cb1bd6-6151-4d00-834c-dbe330c6506b-logs\") pod \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.165742 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data-custom\") pod \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.165775 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b9xn\" (UniqueName: \"kubernetes.io/projected/c7cb1bd6-6151-4d00-834c-dbe330c6506b-kube-api-access-8b9xn\") pod \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\" (UID: \"c7cb1bd6-6151-4d00-834c-dbe330c6506b\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.166277 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cb1bd6-6151-4d00-834c-dbe330c6506b-logs" (OuterVolumeSpecName: "logs") pod "c7cb1bd6-6151-4d00-834c-dbe330c6506b" (UID: "c7cb1bd6-6151-4d00-834c-dbe330c6506b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.172879 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cb1bd6-6151-4d00-834c-dbe330c6506b-kube-api-access-8b9xn" (OuterVolumeSpecName: "kube-api-access-8b9xn") pod "c7cb1bd6-6151-4d00-834c-dbe330c6506b" (UID: "c7cb1bd6-6151-4d00-834c-dbe330c6506b"). InnerVolumeSpecName "kube-api-access-8b9xn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.172989 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c7cb1bd6-6151-4d00-834c-dbe330c6506b" (UID: "c7cb1bd6-6151-4d00-834c-dbe330c6506b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.197376 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7cb1bd6-6151-4d00-834c-dbe330c6506b" (UID: "c7cb1bd6-6151-4d00-834c-dbe330c6506b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.260561 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data" (OuterVolumeSpecName: "config-data") pod "c7cb1bd6-6151-4d00-834c-dbe330c6506b" (UID: "c7cb1bd6-6151-4d00-834c-dbe330c6506b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.265192 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.268360 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.268381 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b9xn\" (UniqueName: \"kubernetes.io/projected/c7cb1bd6-6151-4d00-834c-dbe330c6506b-kube-api-access-8b9xn\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.268392 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.268401 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cb1bd6-6151-4d00-834c-dbe330c6506b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.268409 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7cb1bd6-6151-4d00-834c-dbe330c6506b-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369331 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608c576f-d3a4-42d5-9ba2-92d8f2560905-logs\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369403 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzfpf\" (UniqueName: \"kubernetes.io/projected/608c576f-d3a4-42d5-9ba2-92d8f2560905-kube-api-access-nzfpf\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369496 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data-custom\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369564 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-combined-ca-bundle\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369594 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369645 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/608c576f-d3a4-42d5-9ba2-92d8f2560905-etc-machine-id\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.369671 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-scripts\") pod \"608c576f-d3a4-42d5-9ba2-92d8f2560905\" (UID: \"608c576f-d3a4-42d5-9ba2-92d8f2560905\") "
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.372030 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/608c576f-d3a4-42d5-9ba2-92d8f2560905-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.372369 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608c576f-d3a4-42d5-9ba2-92d8f2560905-logs" (OuterVolumeSpecName: "logs") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.389640 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-scripts" (OuterVolumeSpecName: "scripts") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.389682 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608c576f-d3a4-42d5-9ba2-92d8f2560905-kube-api-access-nzfpf" (OuterVolumeSpecName: "kube-api-access-nzfpf") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "kube-api-access-nzfpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.390088 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.440771 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.472572 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608c576f-d3a4-42d5-9ba2-92d8f2560905-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.472599 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzfpf\" (UniqueName: \"kubernetes.io/projected/608c576f-d3a4-42d5-9ba2-92d8f2560905-kube-api-access-nzfpf\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.472611 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.472620 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.472629 4957 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/608c576f-d3a4-42d5-9ba2-92d8f2560905-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.472637 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.505189 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data" (OuterVolumeSpecName: "config-data") pod "608c576f-d3a4-42d5-9ba2-92d8f2560905" (UID: "608c576f-d3a4-42d5-9ba2-92d8f2560905"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.508889 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"608c576f-d3a4-42d5-9ba2-92d8f2560905","Type":"ContainerDied","Data":"6f434f8a8917b44d1be215d2125286c8182f6394c055a4fa76f3c2cd3be5f70e"}
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.508987 4957 scope.go:117] "RemoveContainer" containerID="e0ec6e476c32e193e5d85442325e9af7b6bcde32fd05b3324bddc3cbab22a77a"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.509248 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.520882 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.542131 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc87d6466-mkw7v"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.543930 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.544048 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c445bc8f8-ltx54" event={"ID":"c7cb1bd6-6151-4d00-834c-dbe330c6506b","Type":"ContainerDied","Data":"3d9cab8a330c1f90ffa069c4d603d8e130496155a7e7aa56f4f6077c1145e3b8"}
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.563356 4957 scope.go:117] "RemoveContainer" containerID="b812932d705db67c58db95eb5a774a24fbbd83516c912b6ef898add9ab4bfcb2"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.571974 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7fb5f67558-76mdf"]
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.575145 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608c576f-d3a4-42d5-9ba2-92d8f2560905-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.585240 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-65df87cd54-bjsc4"]
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.597272 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dfdd78696-x7mzw"]
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.714121 4957 scope.go:117] "RemoveContainer" containerID="22589993cf04dd84a5ced76dc2e4b3c636efd99cf1c4056002c72b0814d46ebb"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.934364 4957 scope.go:117] "RemoveContainer" containerID="1a96283bc3d2c89ae47ddb632408c838b24913044dc4456d34270990e28285b1"
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.963153 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.983908 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:56:01 crc kubenswrapper[4957]: I0218 14:56:01.996251 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dc87d6466-mkw7v"]
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.009361 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6dc87d6466-mkw7v"]
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.033176 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.033843 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="proxy-httpd"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.033869 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="proxy-httpd"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.033901 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-api"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.033910 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-api"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.033928 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-httpd"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.033936 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-httpd"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.033957 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-central-agent"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.033966 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-central-agent"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.033983 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="sg-core"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.033990 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="sg-core"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.034000 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034008 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.034024 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-notification-agent"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034031 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-notification-agent"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.034041 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener-log"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034048 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener-log"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.034068 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034075 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener"
Feb 18 14:56:02 crc kubenswrapper[4957]: E0218 14:56:02.034096 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api-log"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034104 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api-log"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034401 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-httpd"
Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034441 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener"
Feb 18 14:56:02 crc 
kubenswrapper[4957]: I0218 14:56:02.034473 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" containerName="neutron-api" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034494 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="sg-core" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034510 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034527 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-notification-agent" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034540 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="proxy-httpd" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034552 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" containerName="cinder-api-log" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034579 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" containerName="ceilometer-central-agent" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.034589 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" containerName="barbican-keystone-listener-log" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.037529 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.054127 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.054150 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.058835 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c445bc8f8-ltx54"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090094 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjl6\" (UniqueName: \"kubernetes.io/projected/32518ba7-d47c-461e-a3e3-f13cca6bfd40-kube-api-access-mxjl6\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090187 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-config-data\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090238 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090269 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-scripts\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090299 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090324 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-log-httpd\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.090437 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-run-httpd\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.107689 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-c445bc8f8-ltx54"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.135638 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.172953 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:56:02 crc kubenswrapper[4957]: W0218 14:56:02.181929 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefc3a14_3df1_45fd_9d4c_91033abb4d61.slice/crio-394e3a2d64c211a306d9b2e49ec74d77ff38ee7a750bdd9ea1bac6f05b8bd113 WatchSource:0}: Error finding container 394e3a2d64c211a306d9b2e49ec74d77ff38ee7a750bdd9ea1bac6f05b8bd113: Status 404 returned error can't find the container with id 394e3a2d64c211a306d9b2e49ec74d77ff38ee7a750bdd9ea1bac6f05b8bd113 Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.185857 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192518 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxjl6\" (UniqueName: \"kubernetes.io/projected/32518ba7-d47c-461e-a3e3-f13cca6bfd40-kube-api-access-mxjl6\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192578 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-config-data\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192615 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192634 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-scripts\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192655 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192672 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-log-httpd\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.192807 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-run-httpd\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.194411 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-log-httpd\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.194917 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-run-httpd\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.197527 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.200569 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.213299 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.204404 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.216241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxjl6\" (UniqueName: \"kubernetes.io/projected/32518ba7-d47c-461e-a3e3-f13cca6bfd40-kube-api-access-mxjl6\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.216872 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.217152 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.217193 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.220932 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.221581 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-scripts\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.248345 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-config-data\") pod \"ceilometer-0\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.258858 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f8dbf3-ac91-469d-8d11-dc33a51b2d24" path="/var/lib/kubelet/pods/46f8dbf3-ac91-469d-8d11-dc33a51b2d24/volumes" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.260177 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49162e61-07d2-4596-a0c5-fd8f90890e35" path="/var/lib/kubelet/pods/49162e61-07d2-4596-a0c5-fd8f90890e35/volumes" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.260963 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608c576f-d3a4-42d5-9ba2-92d8f2560905" path="/var/lib/kubelet/pods/608c576f-d3a4-42d5-9ba2-92d8f2560905/volumes" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.262255 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cb1bd6-6151-4d00-834c-dbe330c6506b" path="/var/lib/kubelet/pods/c7cb1bd6-6151-4d00-834c-dbe330c6506b/volumes" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.266443 4957 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-engine-687db6759-27j8z"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.272970 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6744cfff74-cgs9b"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.291093 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d845d58dd-m6z94"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.308900 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.309047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.309230 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-config-data\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.309301 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-logs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.309679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-scripts\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.309814 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz946\" (UniqueName: \"kubernetes.io/projected/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-kube-api-access-cz946\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.309931 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-config-data-custom\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.310095 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.310578 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.359513 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-655997456c-8vtx7"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.373546 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.396945 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77b45dc78-6kx7v"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.408566 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rst4k"] Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416053 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416122 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416173 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-config-data\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416201 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-logs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416265 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-scripts\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz946\" (UniqueName: \"kubernetes.io/projected/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-kube-api-access-cz946\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.416303 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-config-data-custom\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.417100 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.417186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.417327 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.417701 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-logs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.426878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-config-data\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.427962 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-scripts\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.428247 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.433696 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.452037 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.452272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-config-data-custom\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.454216 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz946\" (UniqueName: 
\"kubernetes.io/projected/18e4612d-bb78-44c5-b59e-4dbe1342c3d3-kube-api-access-cz946\") pod \"cinder-api-0\" (UID: \"18e4612d-bb78-44c5-b59e-4dbe1342c3d3\") " pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.578250 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.681824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" event={"ID":"1ef0249f-b7a3-4183-9f46-0553a63c26ac","Type":"ContainerStarted","Data":"c857029ab903a044e7a2d9f932029dbe8ef04f4d0673ec35a74b02de5c2ecf84"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.694379 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6744cfff74-cgs9b" event={"ID":"3e973679-e589-4827-8e24-d2fda83ab2e2","Type":"ContainerStarted","Data":"1e9f058a04df53ed2f4240290871638c0bf5315bd1029aeba10c3c32111cf339"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.699152 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" event={"ID":"37f34265-b814-4eb7-b633-b1516352e951","Type":"ContainerStarted","Data":"31b44221f96767c7889188a7cc88911f1d912482af56c514514cb9162b2a03b5"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.700909 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655997456c-8vtx7" event={"ID":"1b86c2f5-26d2-4702-89e3-e093e6cf4b21","Type":"ContainerStarted","Data":"8f36e313704e970af78e4fec2f7907247bfe43f7fd5a3962be535e7237b87ce1"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.702905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfdd78696-x7mzw" event={"ID":"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908","Type":"ContainerStarted","Data":"41853f0fa5b7bbf79cb43da0a47dd8047662bac15803baa8a55289f9e2b841ae"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.705789 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" event={"ID":"9b0c6928-d889-421a-81e2-5e9dd8e1e986","Type":"ContainerStarted","Data":"1cc5d1bfc18192542b1d3142aaa65193c8cb21784486995a7604140f796ff7a4"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.708687 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-687db6759-27j8z" event={"ID":"befc3a14-3df1-45fd-9d4c-91033abb4d61","Type":"ContainerStarted","Data":"4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.709645 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.709681 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-687db6759-27j8z" event={"ID":"befc3a14-3df1-45fd-9d4c-91033abb4d61","Type":"ContainerStarted","Data":"394e3a2d64c211a306d9b2e49ec74d77ff38ee7a750bdd9ea1bac6f05b8bd113"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.744330 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65df87cd54-bjsc4" event={"ID":"bde3dea8-c483-423d-8f4a-74575433fd2f","Type":"ContainerStarted","Data":"b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.744386 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65df87cd54-bjsc4" 
event={"ID":"bde3dea8-c483-423d-8f4a-74575433fd2f","Type":"ContainerStarted","Data":"3115fb07fc1cc6939894ecf250674667b35f5368d315b100d5f6be1dcbdd4f8d"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.744533 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.753996 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-687db6759-27j8z" podStartSLOduration=6.753950892 podStartE2EDuration="6.753950892s" podCreationTimestamp="2026-02-18 14:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:02.730233145 +0000 UTC m=+1469.251097889" watchObservedRunningTime="2026-02-18 14:56:02.753950892 +0000 UTC m=+1469.274815646" Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.755601 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" event={"ID":"f1e0aae7-8a7f-4147-9abb-23f7c7691351","Type":"ContainerStarted","Data":"9dc00d016aae5eb6ec71bc537c384b00cf1f957722d6ac321bee5f9478dc743e"} Feb 18 14:56:02 crc kubenswrapper[4957]: I0218 14:56:02.821965 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-65df87cd54-bjsc4" podStartSLOduration=13.821941189 podStartE2EDuration="13.821941189s" podCreationTimestamp="2026-02-18 14:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:02.767144563 +0000 UTC m=+1469.288009317" watchObservedRunningTime="2026-02-18 14:56:02.821941189 +0000 UTC m=+1469.342805933" Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.109659 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.254851 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.779506 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"18e4612d-bb78-44c5-b59e-4dbe1342c3d3","Type":"ContainerStarted","Data":"4b6d82667a437825a2b4383afa89abad3ca31e7f07475aa1f6326307757d648a"} Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.786468 4957 generic.go:334] "Generic (PLEG): container finished" podID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerID="133aed933ba7099e0fabbec64c2d6990b96b4c50fa7225dcf49d748699dce49b" exitCode=0 Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.787817 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" event={"ID":"1ef0249f-b7a3-4183-9f46-0553a63c26ac","Type":"ContainerDied","Data":"133aed933ba7099e0fabbec64c2d6990b96b4c50fa7225dcf49d748699dce49b"} Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.794949 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerStarted","Data":"b4c923cd8483abdf715f733440a105cd02f27f1615ec6426656423f10ac20452"} Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.869824 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.926766 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 18 14:56:03 crc kubenswrapper[4957]: I0218 14:56:03.987088 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:56:04 crc kubenswrapper[4957]: I0218 14:56:04.627312 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjglk"] Feb 18 14:56:04 crc kubenswrapper[4957]: I0218 14:56:04.816652 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"18e4612d-bb78-44c5-b59e-4dbe1342c3d3","Type":"ContainerStarted","Data":"2c0149e901011d6cbee5edc754ee0d98caa389d4f36bda5df8b4f3632680c6e5"} Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.844594 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" podUID="9b0c6928-d889-421a-81e2-5e9dd8e1e986" containerName="heat-cfnapi" containerID="cri-o://9b5b56bcedf4cb081d5bedfc96a6585aae36c6e77ed3081e492606f90e9c0e68" gracePeriod=60 Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.845130 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.850481 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" event={"ID":"1ef0249f-b7a3-4183-9f46-0553a63c26ac","Type":"ContainerStarted","Data":"f568f4eb22b215c69c7a0c81adfbdc1cfd1d51b2a55d3436b786596a9a8853ea"} Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.850824 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.855581 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cjglk" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server" containerID="cri-o://722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355" gracePeriod=2 Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.855947 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerStarted","Data":"83613776e4fdcc13175a733b8e96ed4451c30ba7131123ddeb3588ef6b3677ba"} Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.882564 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" podStartSLOduration=13.208878391 podStartE2EDuration="16.88253828s" podCreationTimestamp="2026-02-18 14:55:49 +0000 UTC" firstStartedPulling="2026-02-18 14:56:01.663399439 +0000 UTC m=+1468.184264183" lastFinishedPulling="2026-02-18 14:56:05.337059328 +0000 UTC m=+1471.857924072" observedRunningTime="2026-02-18 14:56:05.86836843 +0000 UTC m=+1472.389233174" watchObservedRunningTime="2026-02-18 14:56:05.88253828 +0000 UTC m=+1472.403403024" Feb 18 14:56:05 crc kubenswrapper[4957]: I0218 14:56:05.899980 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" podStartSLOduration=16.899963834 podStartE2EDuration="16.899963834s" podCreationTimestamp="2026-02-18 14:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:05.886508125 +0000 UTC m=+1472.407372889" watchObservedRunningTime="2026-02-18 14:56:05.899963834 
+0000 UTC m=+1472.420828578" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.585613 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.727000 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-utilities" (OuterVolumeSpecName: "utilities") pod "fa177feb-1f29-43cb-baa9-6676bfa7c403" (UID: "fa177feb-1f29-43cb-baa9-6676bfa7c403"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.727079 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-utilities\") pod \"fa177feb-1f29-43cb-baa9-6676bfa7c403\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.728651 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26jg5\" (UniqueName: \"kubernetes.io/projected/fa177feb-1f29-43cb-baa9-6676bfa7c403-kube-api-access-26jg5\") pod \"fa177feb-1f29-43cb-baa9-6676bfa7c403\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.735784 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa177feb-1f29-43cb-baa9-6676bfa7c403-kube-api-access-26jg5" (OuterVolumeSpecName: "kube-api-access-26jg5") pod "fa177feb-1f29-43cb-baa9-6676bfa7c403" (UID: "fa177feb-1f29-43cb-baa9-6676bfa7c403"). InnerVolumeSpecName "kube-api-access-26jg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.756551 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-catalog-content\") pod \"fa177feb-1f29-43cb-baa9-6676bfa7c403\" (UID: \"fa177feb-1f29-43cb-baa9-6676bfa7c403\") " Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.765619 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.765969 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26jg5\" (UniqueName: \"kubernetes.io/projected/fa177feb-1f29-43cb-baa9-6676bfa7c403-kube-api-access-26jg5\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.877502 4957 generic.go:334] "Generic (PLEG): container finished" podID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerID="42dd40aceafde82ff0a7afae8cd0b52b51474b887980a7db56af5d6e528283da" exitCode=1 Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.878900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6744cfff74-cgs9b" event={"ID":"3e973679-e589-4827-8e24-d2fda83ab2e2","Type":"ContainerDied","Data":"42dd40aceafde82ff0a7afae8cd0b52b51474b887980a7db56af5d6e528283da"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.881301 4957 scope.go:117] "RemoveContainer" containerID="42dd40aceafde82ff0a7afae8cd0b52b51474b887980a7db56af5d6e528283da" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.884187 4957 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" event={"ID":"f1e0aae7-8a7f-4147-9abb-23f7c7691351","Type":"ContainerStarted","Data":"455f8ff9066c4fe019e20e8146393f879d4144a9dda31bcc1aadb3d517bbda3d"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.885069 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.914399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" event={"ID":"9b0c6928-d889-421a-81e2-5e9dd8e1e986","Type":"ContainerStarted","Data":"9b5b56bcedf4cb081d5bedfc96a6585aae36c6e77ed3081e492606f90e9c0e68"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.942061 4957 generic.go:334] "Generic (PLEG): container finished" podID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerID="722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355" exitCode=0 Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.942354 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerDied","Data":"722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.942489 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjglk" event={"ID":"fa177feb-1f29-43cb-baa9-6676bfa7c403","Type":"ContainerDied","Data":"6eaca2688003abaaf90afcd56b5547909ddd06529e0501f57cb103943a85b900"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.942603 4957 scope.go:117] "RemoveContainer" containerID="722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.942916 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cjglk" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.953774 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" podStartSLOduration=5.874912714 podStartE2EDuration="8.953722452s" podCreationTimestamp="2026-02-18 14:55:58 +0000 UTC" firstStartedPulling="2026-02-18 14:56:02.21816413 +0000 UTC m=+1468.739028874" lastFinishedPulling="2026-02-18 14:56:05.296973878 +0000 UTC m=+1471.817838612" observedRunningTime="2026-02-18 14:56:06.940633024 +0000 UTC m=+1473.461497768" watchObservedRunningTime="2026-02-18 14:56:06.953722452 +0000 UTC m=+1473.474587196" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.962517 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerStarted","Data":"16de19ebf6ef47001ba7f6991eaa123b1157f7154ee4d8d30dc469d077de5e29"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.981716 4957 generic.go:334] "Generic (PLEG): container finished" podID="37f34265-b814-4eb7-b633-b1516352e951" containerID="bdfce11eb4e025e06bb4df0d601bc902b5674114f8d28b7fe7505bc8ea4d4aec" exitCode=1 Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.983560 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" event={"ID":"37f34265-b814-4eb7-b633-b1516352e951","Type":"ContainerDied","Data":"bdfce11eb4e025e06bb4df0d601bc902b5674114f8d28b7fe7505bc8ea4d4aec"} Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.989481 4957 scope.go:117] "RemoveContainer" containerID="4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece" Feb 18 14:56:06 crc kubenswrapper[4957]: I0218 14:56:06.995150 4957 scope.go:117] "RemoveContainer" containerID="bdfce11eb4e025e06bb4df0d601bc902b5674114f8d28b7fe7505bc8ea4d4aec" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.021380 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655997456c-8vtx7" event={"ID":"1b86c2f5-26d2-4702-89e3-e093e6cf4b21","Type":"ContainerStarted","Data":"261e0a582ceca480c2d1a51b5cc803e626d649d129a07f2898f597fa705b1a98"} Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.021670 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-655997456c-8vtx7" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.031738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfdd78696-x7mzw" event={"ID":"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908","Type":"ContainerStarted","Data":"7265b82329950bbde40268f45d2449ca895f0d203c4f346616159cb830a1c65f"} Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.031963 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5dfdd78696-x7mzw" podUID="0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" containerName="heat-api" containerID="cri-o://7265b82329950bbde40268f45d2449ca895f0d203c4f346616159cb830a1c65f" gracePeriod=60 Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.032277 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.052531 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"18e4612d-bb78-44c5-b59e-4dbe1342c3d3","Type":"ContainerStarted","Data":"c762ab3d89aaff15c23ff0c9147cbcdecbcea007464ba860cbd5be32eca410f3"} Feb 
18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.053729 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.119104 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-655997456c-8vtx7" podStartSLOduration=6.009570681 podStartE2EDuration="9.119080027s" podCreationTimestamp="2026-02-18 14:55:58 +0000 UTC" firstStartedPulling="2026-02-18 14:56:02.187483902 +0000 UTC m=+1468.708348646" lastFinishedPulling="2026-02-18 14:56:05.296993248 +0000 UTC m=+1471.817857992" observedRunningTime="2026-02-18 14:56:07.051356338 +0000 UTC m=+1473.572221082" watchObservedRunningTime="2026-02-18 14:56:07.119080027 +0000 UTC m=+1473.639944771" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.141596 4957 scope.go:117] "RemoveContainer" containerID="7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.144793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa177feb-1f29-43cb-baa9-6676bfa7c403" (UID: "fa177feb-1f29-43cb-baa9-6676bfa7c403"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.147173 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5dfdd78696-x7mzw" podStartSLOduration=14.48485494 podStartE2EDuration="18.14715165s" podCreationTimestamp="2026-02-18 14:55:49 +0000 UTC" firstStartedPulling="2026-02-18 14:56:01.6755285 +0000 UTC m=+1468.196393244" lastFinishedPulling="2026-02-18 14:56:05.33782521 +0000 UTC m=+1471.858689954" observedRunningTime="2026-02-18 14:56:07.087470293 +0000 UTC m=+1473.608335037" watchObservedRunningTime="2026-02-18 14:56:07.14715165 +0000 UTC m=+1473.668016404" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.196502 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa177feb-1f29-43cb-baa9-6676bfa7c403-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.251957 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.251940082 podStartE2EDuration="6.251940082s" podCreationTimestamp="2026-02-18 14:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:07.113583188 +0000 UTC m=+1473.634447932" watchObservedRunningTime="2026-02-18 14:56:07.251940082 +0000 UTC m=+1473.772804826" Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.280834 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.280904 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.284492 4957 scope.go:117] "RemoveContainer" containerID="722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355"
Feb 18 14:56:07 crc kubenswrapper[4957]: E0218 14:56:07.286707 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355\": container with ID starting with 722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355 not found: ID does not exist" containerID="722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.286783 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355"} err="failed to get container status \"722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355\": rpc error: code = NotFound desc = could not find container \"722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355\": container with ID starting with 722f64918d346c25a11589cfce40cd5da1a3c5a87c8d88c91ff7420199341355 not found: ID does not exist"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.286809 4957 scope.go:117] "RemoveContainer" containerID="4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece"
Feb 18 14:56:07 crc kubenswrapper[4957]: E0218 14:56:07.287491 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece\": container with ID starting with 4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece not found: ID does not exist" containerID="4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.287517 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece"} err="failed to get container status \"4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece\": rpc error: code = NotFound desc = could not find container \"4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece\": container with ID starting with 4025573f3af7fb3568e26202d2fbdb267c37da53fb65e5d931dba6b1dd6a7ece not found: ID does not exist"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.287534 4957 scope.go:117] "RemoveContainer" containerID="7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c"
Feb 18 14:56:07 crc kubenswrapper[4957]: E0218 14:56:07.288098 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c\": container with ID starting with 7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c not found: ID does not exist" containerID="7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.288122 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c"} err="failed to get container status \"7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c\": rpc error: code = NotFound desc = could not find container \"7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c\": container with ID starting with 7cd470e247aef31391c038b46de2520c93e6a1a648f7bbc80db424527446488c not found: ID does not exist"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.350498 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjglk"]
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.370604 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cjglk"]
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.883751 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hh59v"]
Feb 18 14:56:07 crc kubenswrapper[4957]: E0218 14:56:07.884651 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.884670 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server"
Feb 18 14:56:07 crc kubenswrapper[4957]: E0218 14:56:07.884689 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="extract-content"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.884697 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="extract-content"
Feb 18 14:56:07 crc kubenswrapper[4957]: E0218 14:56:07.884752 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="extract-utilities"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.884763 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="extract-utilities"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.885032 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" containerName="registry-server"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.889396 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.923510 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hh59v"]
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.969218 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-b67z9"]
Feb 18 14:56:07 crc kubenswrapper[4957]: I0218 14:56:07.971137 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b67z9"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.009207 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b67z9"]
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.029650 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xj97\" (UniqueName: \"kubernetes.io/projected/099706d3-04cd-4729-b03d-774bc14ae8b5-kube-api-access-5xj97\") pod \"nova-api-db-create-hh59v\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.030087 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099706d3-04cd-4729-b03d-774bc14ae8b5-operator-scripts\") pod \"nova-api-db-create-hh59v\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.087914 4957 generic.go:334] "Generic (PLEG): container finished" podID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerID="0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944" exitCode=1
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.087975 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6744cfff74-cgs9b" event={"ID":"3e973679-e589-4827-8e24-d2fda83ab2e2","Type":"ContainerDied","Data":"0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944"}
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.088007 4957 scope.go:117] "RemoveContainer" containerID="42dd40aceafde82ff0a7afae8cd0b52b51474b887980a7db56af5d6e528283da"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.088810 4957 scope.go:117] "RemoveContainer" containerID="0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944"
Feb 18 14:56:08 crc kubenswrapper[4957]: E0218 14:56:08.089634 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6744cfff74-cgs9b_openstack(3e973679-e589-4827-8e24-d2fda83ab2e2)\"" pod="openstack/heat-api-6744cfff74-cgs9b" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.092896 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0197-account-create-update-tlpj9"]
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.094311 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0197-account-create-update-tlpj9"
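heat-api-6744cfff74-cgs9b above (and heat-cfnapi just below) is being held in CrashLoopBackOff with a 10s back-off. Kubelet's restart back-off is commonly described as starting at 10s and doubling after each failed restart up to a 5-minute cap; a sketch of that schedule, with the exact constants treated as assumptions rather than read out of this kubelet's configuration:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelays sketches the restart back-off implied by the
// "back-off 10s restarting failed container" entries: the delay doubles
// after each failed restart until it hits a cap. The 10s initial value
// and 5m cap are the commonly documented kubelet defaults, assumed here.
func crashLoopDelays(initial, maxDelay time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := initial
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 7))
	// Output: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
```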
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.096793 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.123712 4957 generic.go:334] "Generic (PLEG): container finished" podID="37f34265-b814-4eb7-b633-b1516352e951" containerID="a63d2bf777122f3fbb509f7787ba6007bbb503ee64f6e8458ab8b0b855be90e6" exitCode=1
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.123797 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" event={"ID":"37f34265-b814-4eb7-b633-b1516352e951","Type":"ContainerDied","Data":"a63d2bf777122f3fbb509f7787ba6007bbb503ee64f6e8458ab8b0b855be90e6"}
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.127860 4957 scope.go:117] "RemoveContainer" containerID="a63d2bf777122f3fbb509f7787ba6007bbb503ee64f6e8458ab8b0b855be90e6"
Feb 18 14:56:08 crc kubenswrapper[4957]: E0218 14:56:08.128358 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d845d58dd-m6z94_openstack(37f34265-b814-4eb7-b633-b1516352e951)\"" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" podUID="37f34265-b814-4eb7-b633-b1516352e951"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.128969 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sgw96"]
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.144412 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sgw96"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.147401 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb47212d-7882-43e9-bea7-a114f2e4f629-operator-scripts\") pod \"nova-cell0-db-create-b67z9\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " pod="openstack/nova-cell0-db-create-b67z9"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.147510 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72fz\" (UniqueName: \"kubernetes.io/projected/cb47212d-7882-43e9-bea7-a114f2e4f629-kube-api-access-h72fz\") pod \"nova-cell0-db-create-b67z9\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " pod="openstack/nova-cell0-db-create-b67z9"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.148188 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0197-account-create-update-tlpj9"]
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.148794 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099706d3-04cd-4729-b03d-774bc14ae8b5-operator-scripts\") pod \"nova-api-db-create-hh59v\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.149198 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xj97\" (UniqueName: \"kubernetes.io/projected/099706d3-04cd-4729-b03d-774bc14ae8b5-kube-api-access-5xj97\") pod \"nova-api-db-create-hh59v\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.149842 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099706d3-04cd-4729-b03d-774bc14ae8b5-operator-scripts\") pod \"nova-api-db-create-hh59v\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " pod="openstack/nova-api-db-create-hh59v"
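The reconciler_common.go / operation_generator.go sequence above (VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded") is the kubelet's volume manager walking each volume of the new db-create pods from desired state to mounted state. Reconstructed from the UniqueName strings, nova-api-db-create-hh59v plausibly declares volumes like the following; only the volume names appear in the log, so the backing ConfigMap reference is an assumption:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// volumesForDBCreateJob reconstructs, from the UniqueName strings above,
// what the nova-api-db-create-hh59v pod's volumes plausibly look like: a
// ConfigMap-backed "operator-scripts" volume and the projected service
// account token volume "kube-api-access-5xj97" that is injected for
// every pod.
func volumesForDBCreateJob() []corev1.Volume {
	return []corev1.Volume{
		{
			Name: "operator-scripts",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					// The ConfigMap's name is assumed; the log shows only the volume name.
					LocalObjectReference: corev1.LocalObjectReference{Name: "nova-api-db-create-scripts"},
				},
			},
		},
		{
			Name: "kube-api-access-5xj97",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
}
```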
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.161687 4957 generic.go:334] "Generic (PLEG): container finished" podID="0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" containerID="7265b82329950bbde40268f45d2449ca895f0d203c4f346616159cb830a1c65f" exitCode=0
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.161773 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfdd78696-x7mzw" event={"ID":"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908","Type":"ContainerDied","Data":"7265b82329950bbde40268f45d2449ca895f0d203c4f346616159cb830a1c65f"}
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.165297 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sgw96"]
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.197296 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xj97\" (UniqueName: \"kubernetes.io/projected/099706d3-04cd-4729-b03d-774bc14ae8b5-kube-api-access-5xj97\") pod \"nova-api-db-create-hh59v\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.218621 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hh59v"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.268496 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvg5\" (UniqueName: \"kubernetes.io/projected/eb399cd7-737a-423f-8a68-71d4a3c4f592-kube-api-access-pgvg5\") pod \"nova-api-0197-account-create-update-tlpj9\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " pod="openstack/nova-api-0197-account-create-update-tlpj9"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.268770 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8rcb\" (UniqueName: \"kubernetes.io/projected/c916872c-8d06-4608-84d8-1159ad3c99eb-kube-api-access-x8rcb\") pod \"nova-cell1-db-create-sgw96\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " pod="openstack/nova-cell1-db-create-sgw96"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.269000 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb47212d-7882-43e9-bea7-a114f2e4f629-operator-scripts\") pod \"nova-cell0-db-create-b67z9\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " pod="openstack/nova-cell0-db-create-b67z9"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.269090 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916872c-8d06-4608-84d8-1159ad3c99eb-operator-scripts\") pod \"nova-cell1-db-create-sgw96\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " pod="openstack/nova-cell1-db-create-sgw96"
Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.269129 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72fz\" (UniqueName: 
\"kubernetes.io/projected/cb47212d-7882-43e9-bea7-a114f2e4f629-kube-api-access-h72fz\") pod \"nova-cell0-db-create-b67z9\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " pod="openstack/nova-cell0-db-create-b67z9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.269170 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb399cd7-737a-423f-8a68-71d4a3c4f592-operator-scripts\") pod \"nova-api-0197-account-create-update-tlpj9\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.281746 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb47212d-7882-43e9-bea7-a114f2e4f629-operator-scripts\") pod \"nova-cell0-db-create-b67z9\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " pod="openstack/nova-cell0-db-create-b67z9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.324568 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa177feb-1f29-43cb-baa9-6676bfa7c403" path="/var/lib/kubelet/pods/fa177feb-1f29-43cb-baa9-6676bfa7c403/volumes" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.326070 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerStarted","Data":"8cef25036ba9f6eeba7b7d0c60e51e5b021cad30ea9b8df843af6d851b6c405d"} Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.329602 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6416-account-create-update-xvbh7"] Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.331249 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72fz\" (UniqueName: \"kubernetes.io/projected/cb47212d-7882-43e9-bea7-a114f2e4f629-kube-api-access-h72fz\") pod \"nova-cell0-db-create-b67z9\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " pod="openstack/nova-cell0-db-create-b67z9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.334932 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.338531 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.358229 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6416-account-create-update-xvbh7"] Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.371226 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8rcb\" (UniqueName: \"kubernetes.io/projected/c916872c-8d06-4608-84d8-1159ad3c99eb-kube-api-access-x8rcb\") pod \"nova-cell1-db-create-sgw96\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.371638 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916872c-8d06-4608-84d8-1159ad3c99eb-operator-scripts\") pod \"nova-cell1-db-create-sgw96\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.371697 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb399cd7-737a-423f-8a68-71d4a3c4f592-operator-scripts\") pod \"nova-api-0197-account-create-update-tlpj9\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.371765 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvg5\" (UniqueName: \"kubernetes.io/projected/eb399cd7-737a-423f-8a68-71d4a3c4f592-kube-api-access-pgvg5\") pod \"nova-api-0197-account-create-update-tlpj9\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.391481 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916872c-8d06-4608-84d8-1159ad3c99eb-operator-scripts\") pod \"nova-cell1-db-create-sgw96\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.391562 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb399cd7-737a-423f-8a68-71d4a3c4f592-operator-scripts\") pod \"nova-api-0197-account-create-update-tlpj9\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.402785 4957 scope.go:117] "RemoveContainer" containerID="bdfce11eb4e025e06bb4df0d601bc902b5674114f8d28b7fe7505bc8ea4d4aec" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.424373 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvg5\" (UniqueName: \"kubernetes.io/projected/eb399cd7-737a-423f-8a68-71d4a3c4f592-kube-api-access-pgvg5\") pod \"nova-api-0197-account-create-update-tlpj9\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.424380 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x8rcb\" (UniqueName: \"kubernetes.io/projected/c916872c-8d06-4608-84d8-1159ad3c99eb-kube-api-access-x8rcb\") pod \"nova-cell1-db-create-sgw96\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.462519 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f855-account-create-update-2d5wz"] Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.464529 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.467972 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.476926 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f855-account-create-update-2d5wz"] Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.498212 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncbwm\" (UniqueName: \"kubernetes.io/projected/a7dd324f-84f9-4860-8cd0-c00e9eba5367-kube-api-access-ncbwm\") pod \"nova-cell1-f855-account-create-update-2d5wz\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.498352 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7dd324f-84f9-4860-8cd0-c00e9eba5367-operator-scripts\") pod \"nova-cell1-f855-account-create-update-2d5wz\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.498377 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbcc208f-6b78-4c4c-88d9-043d963343de-operator-scripts\") pod \"nova-cell0-6416-account-create-update-xvbh7\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.498411 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkrm\" (UniqueName: \"kubernetes.io/projected/fbcc208f-6b78-4c4c-88d9-043d963343de-kube-api-access-6nkrm\") pod \"nova-cell0-6416-account-create-update-xvbh7\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.610326 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.611198 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-b67z9" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.611892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7dd324f-84f9-4860-8cd0-c00e9eba5367-operator-scripts\") pod \"nova-cell1-f855-account-create-update-2d5wz\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.611938 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbcc208f-6b78-4c4c-88d9-043d963343de-operator-scripts\") pod \"nova-cell0-6416-account-create-update-xvbh7\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.611985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkrm\" (UniqueName: \"kubernetes.io/projected/fbcc208f-6b78-4c4c-88d9-043d963343de-kube-api-access-6nkrm\") pod \"nova-cell0-6416-account-create-update-xvbh7\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.612109 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncbwm\" (UniqueName: \"kubernetes.io/projected/a7dd324f-84f9-4860-8cd0-c00e9eba5367-kube-api-access-ncbwm\") pod \"nova-cell1-f855-account-create-update-2d5wz\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.623133 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.628145 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7dd324f-84f9-4860-8cd0-c00e9eba5367-operator-scripts\") pod \"nova-cell1-f855-account-create-update-2d5wz\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.630267 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbcc208f-6b78-4c4c-88d9-043d963343de-operator-scripts\") pod \"nova-cell0-6416-account-create-update-xvbh7\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.639804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncbwm\" (UniqueName: \"kubernetes.io/projected/a7dd324f-84f9-4860-8cd0-c00e9eba5367-kube-api-access-ncbwm\") pod \"nova-cell1-f855-account-create-update-2d5wz\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.640338 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkrm\" (UniqueName: \"kubernetes.io/projected/fbcc208f-6b78-4c4c-88d9-043d963343de-kube-api-access-6nkrm\") pod \"nova-cell0-6416-account-create-update-xvbh7\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.705086 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:08 crc kubenswrapper[4957]: I0218 14:56:08.823364 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.110953 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hh59v"] Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.222927 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.341927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dfdd78696-x7mzw" event={"ID":"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908","Type":"ContainerDied","Data":"41853f0fa5b7bbf79cb43da0a47dd8047662bac15803baa8a55289f9e2b841ae"} Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.341979 4957 scope.go:117] "RemoveContainer" containerID="7265b82329950bbde40268f45d2449ca895f0d203c4f346616159cb830a1c65f" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.342100 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5dfdd78696-x7mzw" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.349952 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hh59v" event={"ID":"099706d3-04cd-4729-b03d-774bc14ae8b5","Type":"ContainerStarted","Data":"d5f699113960681164d95f463e45af8acb58b3d81dec8f16c3a6f786b6cf3dea"} Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.357034 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-combined-ca-bundle\") pod \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.357118 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data-custom\") pod \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.357523 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data\") pod \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.357670 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwq4g\" (UniqueName: \"kubernetes.io/projected/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-kube-api-access-hwq4g\") pod \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\" (UID: \"0f682c53-b8cc-42e4-a3b3-3bccfdf7c908\") " Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.370709 4957 scope.go:117] "RemoveContainer" containerID="0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944" Feb 18 14:56:09 crc kubenswrapper[4957]: E0218 14:56:09.370943 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6744cfff74-cgs9b_openstack(3e973679-e589-4827-8e24-d2fda83ab2e2)\"" pod="openstack/heat-api-6744cfff74-cgs9b" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.372208 4957 scope.go:117] "RemoveContainer" containerID="a63d2bf777122f3fbb509f7787ba6007bbb503ee64f6e8458ab8b0b855be90e6" Feb 18 14:56:09 crc kubenswrapper[4957]: E0218 14:56:09.372713 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d845d58dd-m6z94_openstack(37f34265-b814-4eb7-b633-b1516352e951)\"" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" podUID="37f34265-b814-4eb7-b633-b1516352e951" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.373155 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-kube-api-access-hwq4g" (OuterVolumeSpecName: "kube-api-access-hwq4g") pod "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" (UID: "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908"). InnerVolumeSpecName "kube-api-access-hwq4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.386368 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" (UID: "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.460826 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwq4g\" (UniqueName: \"kubernetes.io/projected/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-kube-api-access-hwq4g\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.461134 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.462745 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" (UID: "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.476466 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b67z9"] Feb 18 14:56:09 crc kubenswrapper[4957]: W0218 14:56:09.482820 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb47212d_7882_43e9_bea7_a114f2e4f629.slice/crio-56d1df01cef611d17552f315909794a7d05410fc786cb1b46c2c517073d12a94 WatchSource:0}: Error finding container 56d1df01cef611d17552f315909794a7d05410fc786cb1b46c2c517073d12a94: Status 404 returned error can't find the container with id 56d1df01cef611d17552f315909794a7d05410fc786cb1b46c2c517073d12a94 Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.545583 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data" (OuterVolumeSpecName: "config-data") pod "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" (UID: "0f682c53-b8cc-42e4-a3b3-3bccfdf7c908"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.566541 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.566851 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.697675 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dfdd78696-x7mzw"] Feb 18 14:56:09 crc kubenswrapper[4957]: I0218 14:56:09.709937 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5dfdd78696-x7mzw"] Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.027496 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6416-account-create-update-xvbh7"] Feb 18 14:56:10 crc kubenswrapper[4957]: W0218 14:56:10.029805 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbcc208f_6b78_4c4c_88d9_043d963343de.slice/crio-3d67cd3344610b83f592680a5d0103e1df0a51b839ab2bfa7b39d9ab0d369373 WatchSource:0}: Error finding container 3d67cd3344610b83f592680a5d0103e1df0a51b839ab2bfa7b39d9ab0d369373: Status 404 returned error can't find the container with id 3d67cd3344610b83f592680a5d0103e1df0a51b839ab2bfa7b39d9ab0d369373 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.039859 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0197-account-create-update-tlpj9"] Feb 18 14:56:10 crc kubenswrapper[4957]: W0218 14:56:10.056432 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7dd324f_84f9_4860_8cd0_c00e9eba5367.slice/crio-7ba7a6cd1044839e14ecfe6988e9a137dad7352936b51be1ca20774b667a2ecc WatchSource:0}: Error finding container 7ba7a6cd1044839e14ecfe6988e9a137dad7352936b51be1ca20774b667a2ecc: Status 404 returned error can't find the container with id 7ba7a6cd1044839e14ecfe6988e9a137dad7352936b51be1ca20774b667a2ecc Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.057745 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f855-account-create-update-2d5wz"] Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.067249 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sgw96"] Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.250355 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" path="/var/lib/kubelet/pods/0f682c53-b8cc-42e4-a3b3-3bccfdf7c908/volumes" Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.304682 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.442101 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerStarted","Data":"13619cc6549493bb8b29544bcfb1f3bfd178bf312059f6cb10aaf479918f2488"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.442370 4957 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-central-agent" containerID="cri-o://83613776e4fdcc13175a733b8e96ed4451c30ba7131123ddeb3588ef6b3677ba" gracePeriod=30 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.442490 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="proxy-httpd" containerID="cri-o://13619cc6549493bb8b29544bcfb1f3bfd178bf312059f6cb10aaf479918f2488" gracePeriod=30 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.442532 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.442550 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="sg-core" containerID="cri-o://8cef25036ba9f6eeba7b7d0c60e51e5b021cad30ea9b8df843af6d851b6c405d" gracePeriod=30 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.442617 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-notification-agent" containerID="cri-o://16de19ebf6ef47001ba7f6991eaa123b1157f7154ee4d8d30dc469d077de5e29" gracePeriod=30 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.457714 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb47212d-7882-43e9-bea7-a114f2e4f629" containerID="716110cbddac71890976bfd1a4c49210212732b27bb2bf28f291137c8d596708" exitCode=0 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.457849 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b67z9" event={"ID":"cb47212d-7882-43e9-bea7-a114f2e4f629","Type":"ContainerDied","Data":"716110cbddac71890976bfd1a4c49210212732b27bb2bf28f291137c8d596708"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.458286 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b67z9" event={"ID":"cb47212d-7882-43e9-bea7-a114f2e4f629","Type":"ContainerStarted","Data":"56d1df01cef611d17552f315909794a7d05410fc786cb1b46c2c517073d12a94"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.498998 4957 generic.go:334] "Generic (PLEG): container finished" podID="099706d3-04cd-4729-b03d-774bc14ae8b5" containerID="4a3b20a3f54458baa4887945c0f72d355994507e67f82655beaca5e29bbe5530" exitCode=0 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.498105 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xmqsm"] Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.500822 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hh59v" event={"ID":"099706d3-04cd-4729-b03d-774bc14ae8b5","Type":"ContainerDied","Data":"4a3b20a3f54458baa4887945c0f72d355994507e67f82655beaca5e29bbe5530"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.501036 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerName="dnsmasq-dns" containerID="cri-o://468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171" gracePeriod=10 Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.509443 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=3.186379456 podStartE2EDuration="9.509366508s" podCreationTimestamp="2026-02-18 14:56:01 +0000 UTC" firstStartedPulling="2026-02-18 14:56:03.161594346 +0000 UTC m=+1469.682459080" lastFinishedPulling="2026-02-18 14:56:09.484581388 +0000 UTC m=+1476.005446132" observedRunningTime="2026-02-18 14:56:10.491685277 +0000 UTC m=+1477.012550041" watchObservedRunningTime="2026-02-18 14:56:10.509366508 +0000 UTC m=+1477.030231252" Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.511166 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f855-account-create-update-2d5wz" event={"ID":"a7dd324f-84f9-4860-8cd0-c00e9eba5367","Type":"ContainerStarted","Data":"7ba7a6cd1044839e14ecfe6988e9a137dad7352936b51be1ca20774b667a2ecc"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.516763 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0197-account-create-update-tlpj9" event={"ID":"eb399cd7-737a-423f-8a68-71d4a3c4f592","Type":"ContainerStarted","Data":"d72975c1c1120766f44cdfe9de026c9cf1e95d8951868ce8c8d00b85f05024e2"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.552109 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" event={"ID":"fbcc208f-6b78-4c4c-88d9-043d963343de","Type":"ContainerStarted","Data":"3d67cd3344610b83f592680a5d0103e1df0a51b839ab2bfa7b39d9ab0d369373"} Feb 18 14:56:10 crc kubenswrapper[4957]: I0218 14:56:10.570595 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sgw96" event={"ID":"c916872c-8d06-4608-84d8-1159ad3c99eb","Type":"ContainerStarted","Data":"26e51923ccaaa81a76f919dd0d14c674cc2faa56bda7f49966dba1ea43d004b1"} Feb 18 14:56:10 crc kubenswrapper[4957]: E0218 14:56:10.645252 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32518ba7_d47c_461e_a3e3_f13cca6bfd40.slice/crio-conmon-8cef25036ba9f6eeba7b7d0c60e51e5b021cad30ea9b8df843af6d851b6c405d.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.413111 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm"
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.492180 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-svc\") pod \"88ba3d00-f51e-4168-809e-ee46dad21b45\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") "
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.492477 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwv5g\" (UniqueName: \"kubernetes.io/projected/88ba3d00-f51e-4168-809e-ee46dad21b45-kube-api-access-pwv5g\") pod \"88ba3d00-f51e-4168-809e-ee46dad21b45\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") "
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.492579 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-swift-storage-0\") pod \"88ba3d00-f51e-4168-809e-ee46dad21b45\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") "
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.492648 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-nb\") pod \"88ba3d00-f51e-4168-809e-ee46dad21b45\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") "
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.492709 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-config\") pod \"88ba3d00-f51e-4168-809e-ee46dad21b45\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") "
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.492819 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-sb\") pod \"88ba3d00-f51e-4168-809e-ee46dad21b45\" (UID: \"88ba3d00-f51e-4168-809e-ee46dad21b45\") "
Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.521063 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ba3d00-f51e-4168-809e-ee46dad21b45-kube-api-access-pwv5g" (OuterVolumeSpecName: "kube-api-access-pwv5g") pod "88ba3d00-f51e-4168-809e-ee46dad21b45" (UID: "88ba3d00-f51e-4168-809e-ee46dad21b45"). InnerVolumeSpecName "kube-api-access-pwv5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
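The unmount sequence above is setup run in reverse: once dnsmasq-dns-5c9776ccc5-xmqsm is deleted, each of its volumes goes through "UnmountVolume started" and "UnmountVolume.TearDown succeeded", is then reported as detached with an empty DevicePath, and only after all volumes are gone does the kubelet log "Cleaned up orphaned pod volumes dir", as it did for earlier pods. A sketch of that ordering, with illustrative function names rather than the kubelet's:

```go
package sketch

// unmountPodVolumes sketches the teardown ordering visible in the log:
// every mounted volume is torn down first, each one is then reported as
// detached, and the pod's volumes directory is removed only once nothing
// is left mounted inside it.
func unmountPodVolumes(mounted []string, tearDown func(volume string) error, removeVolumesDir func() error) error {
	for _, vol := range mounted {
		// "operationExecutor.UnmountVolume started" -> "UnmountVolume.TearDown succeeded"
		if err := tearDown(vol); err != nil {
			// A failed TearDown leaves the volume in the actual state of
			// the world, and the reconciler retries it on a later pass.
			return err
		}
		// Success is what the reconciler then reports as:
		//   Volume detached for volume <vol> ... DevicePath ""
	}
	// Only now is it safe to log "Cleaned up orphaned pod volumes dir".
	return removeVolumesDir()
}
```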
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.565111 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.565162 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.566095 4957 scope.go:117] "RemoveContainer" containerID="a63d2bf777122f3fbb509f7787ba6007bbb503ee64f6e8458ab8b0b855be90e6" Feb 18 14:56:11 crc kubenswrapper[4957]: E0218 14:56:11.566568 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7d845d58dd-m6z94_openstack(37f34265-b814-4eb7-b633-b1516352e951)\"" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" podUID="37f34265-b814-4eb7-b633-b1516352e951" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.575179 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-config" (OuterVolumeSpecName: "config") pod "88ba3d00-f51e-4168-809e-ee46dad21b45" (UID: "88ba3d00-f51e-4168-809e-ee46dad21b45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.578217 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88ba3d00-f51e-4168-809e-ee46dad21b45" (UID: "88ba3d00-f51e-4168-809e-ee46dad21b45"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.587853 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88ba3d00-f51e-4168-809e-ee46dad21b45" (UID: "88ba3d00-f51e-4168-809e-ee46dad21b45"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.592102 4957 generic.go:334] "Generic (PLEG): container finished" podID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerID="13619cc6549493bb8b29544bcfb1f3bfd178bf312059f6cb10aaf479918f2488" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.592134 4957 generic.go:334] "Generic (PLEG): container finished" podID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerID="8cef25036ba9f6eeba7b7d0c60e51e5b021cad30ea9b8df843af6d851b6c405d" exitCode=2 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.592145 4957 generic.go:334] "Generic (PLEG): container finished" podID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerID="16de19ebf6ef47001ba7f6991eaa123b1157f7154ee4d8d30dc469d077de5e29" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.592182 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerDied","Data":"13619cc6549493bb8b29544bcfb1f3bfd178bf312059f6cb10aaf479918f2488"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.592210 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerDied","Data":"8cef25036ba9f6eeba7b7d0c60e51e5b021cad30ea9b8df843af6d851b6c405d"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.592220 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerDied","Data":"16de19ebf6ef47001ba7f6991eaa123b1157f7154ee4d8d30dc469d077de5e29"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.593577 4957 generic.go:334] "Generic (PLEG): container finished" podID="fbcc208f-6b78-4c4c-88d9-043d963343de" containerID="aaebe5fbdb152684860ae096b14c21d0c576ef2f34946a12b07f0cd5158b5ae4" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.593616 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" event={"ID":"fbcc208f-6b78-4c4c-88d9-043d963343de","Type":"ContainerDied","Data":"aaebe5fbdb152684860ae096b14c21d0c576ef2f34946a12b07f0cd5158b5ae4"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.597789 4957 generic.go:334] "Generic (PLEG): container finished" podID="c916872c-8d06-4608-84d8-1159ad3c99eb" containerID="40c7b293d24561fef5e2e690dc155987e816f9c9eb40019c0084028a66501342" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.597836 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sgw96" event={"ID":"c916872c-8d06-4608-84d8-1159ad3c99eb","Type":"ContainerDied","Data":"40c7b293d24561fef5e2e690dc155987e816f9c9eb40019c0084028a66501342"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.602031 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.602081 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.602093 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.602107 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwv5g\" (UniqueName: \"kubernetes.io/projected/88ba3d00-f51e-4168-809e-ee46dad21b45-kube-api-access-pwv5g\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.615402 4957 generic.go:334] "Generic (PLEG): container finished" podID="a7dd324f-84f9-4860-8cd0-c00e9eba5367" containerID="884f7835142ab18776738373d9e1490cdd17b294e1b88591dfa7d04c36fa3716" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.615488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f855-account-create-update-2d5wz" event={"ID":"a7dd324f-84f9-4860-8cd0-c00e9eba5367","Type":"ContainerDied","Data":"884f7835142ab18776738373d9e1490cdd17b294e1b88591dfa7d04c36fa3716"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.628252 4957 generic.go:334] "Generic (PLEG): container finished" podID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerID="468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.628351 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" event={"ID":"88ba3d00-f51e-4168-809e-ee46dad21b45","Type":"ContainerDied","Data":"468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.628378 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" event={"ID":"88ba3d00-f51e-4168-809e-ee46dad21b45","Type":"ContainerDied","Data":"c211fd28de5f634eeb862576f336b7514081517505ba949ee6d66f8066058b9e"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.628394 4957 scope.go:117] "RemoveContainer" containerID="468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.628386 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xmqsm" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.633091 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "88ba3d00-f51e-4168-809e-ee46dad21b45" (UID: "88ba3d00-f51e-4168-809e-ee46dad21b45"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.634254 4957 generic.go:334] "Generic (PLEG): container finished" podID="eb399cd7-737a-423f-8a68-71d4a3c4f592" containerID="ffc19cb12c23d8889812cecd6f0f6af7f8f1ab7d3328e956f842b34069817284" exitCode=0 Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.634475 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0197-account-create-update-tlpj9" event={"ID":"eb399cd7-737a-423f-8a68-71d4a3c4f592","Type":"ContainerDied","Data":"ffc19cb12c23d8889812cecd6f0f6af7f8f1ab7d3328e956f842b34069817284"} Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.636956 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88ba3d00-f51e-4168-809e-ee46dad21b45" (UID: "88ba3d00-f51e-4168-809e-ee46dad21b45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.656133 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.657676 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.657965 4957 scope.go:117] "RemoveContainer" containerID="0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944" Feb 18 14:56:11 crc kubenswrapper[4957]: E0218 14:56:11.663911 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6744cfff74-cgs9b_openstack(3e973679-e589-4827-8e24-d2fda83ab2e2)\"" pod="openstack/heat-api-6744cfff74-cgs9b" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.693673 4957 scope.go:117] "RemoveContainer" containerID="e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.705556 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.705591 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ba3d00-f51e-4168-809e-ee46dad21b45-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.744949 4957 scope.go:117] "RemoveContainer" containerID="468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171" Feb 18 14:56:11 crc kubenswrapper[4957]: E0218 14:56:11.746013 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171\": container with ID starting with 468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171 not found: ID does not exist" containerID="468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.746054 4957 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171"} err="failed to get container status \"468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171\": rpc error: code = NotFound desc = could not find container \"468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171\": container with ID starting with 468d188c4710786a555618d14803fc9033243d1d74a37ba304b78da81c0d0171 not found: ID does not exist" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.746083 4957 scope.go:117] "RemoveContainer" containerID="e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48" Feb 18 14:56:11 crc kubenswrapper[4957]: E0218 14:56:11.746279 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48\": container with ID starting with e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48 not found: ID does not exist" containerID="e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.746306 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48"} err="failed to get container status \"e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48\": rpc error: code = NotFound desc = could not find container \"e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48\": container with ID starting with e43614936541aa847085fa2acc27df27335703c8e13d1309259154c97a649b48 not found: ID does not exist" Feb 18 14:56:11 crc kubenswrapper[4957]: I0218 14:56:11.989862 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xmqsm"] Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.014769 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xmqsm"] Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.235337 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" path="/var/lib/kubelet/pods/88ba3d00-f51e-4168-809e-ee46dad21b45/volumes" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.350001 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hh59v" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.357182 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-b67z9" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.446159 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xj97\" (UniqueName: \"kubernetes.io/projected/099706d3-04cd-4729-b03d-774bc14ae8b5-kube-api-access-5xj97\") pod \"099706d3-04cd-4729-b03d-774bc14ae8b5\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.446355 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72fz\" (UniqueName: \"kubernetes.io/projected/cb47212d-7882-43e9-bea7-a114f2e4f629-kube-api-access-h72fz\") pod \"cb47212d-7882-43e9-bea7-a114f2e4f629\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.446601 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099706d3-04cd-4729-b03d-774bc14ae8b5-operator-scripts\") pod \"099706d3-04cd-4729-b03d-774bc14ae8b5\" (UID: \"099706d3-04cd-4729-b03d-774bc14ae8b5\") " Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.446651 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb47212d-7882-43e9-bea7-a114f2e4f629-operator-scripts\") pod \"cb47212d-7882-43e9-bea7-a114f2e4f629\" (UID: \"cb47212d-7882-43e9-bea7-a114f2e4f629\") " Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.447934 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099706d3-04cd-4729-b03d-774bc14ae8b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "099706d3-04cd-4729-b03d-774bc14ae8b5" (UID: "099706d3-04cd-4729-b03d-774bc14ae8b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.448094 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb47212d-7882-43e9-bea7-a114f2e4f629-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb47212d-7882-43e9-bea7-a114f2e4f629" (UID: "cb47212d-7882-43e9-bea7-a114f2e4f629"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.452956 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb47212d-7882-43e9-bea7-a114f2e4f629-kube-api-access-h72fz" (OuterVolumeSpecName: "kube-api-access-h72fz") pod "cb47212d-7882-43e9-bea7-a114f2e4f629" (UID: "cb47212d-7882-43e9-bea7-a114f2e4f629"). InnerVolumeSpecName "kube-api-access-h72fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.453974 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099706d3-04cd-4729-b03d-774bc14ae8b5-kube-api-access-5xj97" (OuterVolumeSpecName: "kube-api-access-5xj97") pod "099706d3-04cd-4729-b03d-774bc14ae8b5" (UID: "099706d3-04cd-4729-b03d-774bc14ae8b5"). InnerVolumeSpecName "kube-api-access-5xj97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.550775 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h72fz\" (UniqueName: \"kubernetes.io/projected/cb47212d-7882-43e9-bea7-a114f2e4f629-kube-api-access-h72fz\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.550826 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099706d3-04cd-4729-b03d-774bc14ae8b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.550837 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb47212d-7882-43e9-bea7-a114f2e4f629-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.550849 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xj97\" (UniqueName: \"kubernetes.io/projected/099706d3-04cd-4729-b03d-774bc14ae8b5-kube-api-access-5xj97\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.646245 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aa2f421b-f6d0-4db4-9162-f863e45ca417","Type":"ContainerStarted","Data":"4bb12ea1898d1d45e09d943b4e3e62f5a01bb97956a14dc6b5094caa1160f024"} Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.649715 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b67z9" event={"ID":"cb47212d-7882-43e9-bea7-a114f2e4f629","Type":"ContainerDied","Data":"56d1df01cef611d17552f315909794a7d05410fc786cb1b46c2c517073d12a94"} Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.649749 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56d1df01cef611d17552f315909794a7d05410fc786cb1b46c2c517073d12a94" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.649806 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b67z9" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.655260 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hh59v" event={"ID":"099706d3-04cd-4729-b03d-774bc14ae8b5","Type":"ContainerDied","Data":"d5f699113960681164d95f463e45af8acb58b3d81dec8f16c3a6f786b6cf3dea"} Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.655315 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f699113960681164d95f463e45af8acb58b3d81dec8f16c3a6f786b6cf3dea" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.655398 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hh59v" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.656718 4957 scope.go:117] "RemoveContainer" containerID="0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944" Feb 18 14:56:12 crc kubenswrapper[4957]: E0218 14:56:12.657056 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6744cfff74-cgs9b_openstack(3e973679-e589-4827-8e24-d2fda83ab2e2)\"" pod="openstack/heat-api-6744cfff74-cgs9b" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.669040 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:56:12 crc kubenswrapper[4957]: I0218 14:56:12.686065 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.002686039 podStartE2EDuration="29.686042574s" podCreationTimestamp="2026-02-18 14:55:43 +0000 UTC" firstStartedPulling="2026-02-18 14:55:44.012347107 +0000 UTC m=+1450.533211851" lastFinishedPulling="2026-02-18 14:56:11.695703632 +0000 UTC m=+1478.216568386" observedRunningTime="2026-02-18 14:56:12.669969269 +0000 UTC m=+1479.190834013" watchObservedRunningTime="2026-02-18 14:56:12.686042574 +0000 UTC m=+1479.206907318" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.434181 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.616470 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkrm\" (UniqueName: \"kubernetes.io/projected/fbcc208f-6b78-4c4c-88d9-043d963343de-kube-api-access-6nkrm\") pod \"fbcc208f-6b78-4c4c-88d9-043d963343de\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.616849 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbcc208f-6b78-4c4c-88d9-043d963343de-operator-scripts\") pod \"fbcc208f-6b78-4c4c-88d9-043d963343de\" (UID: \"fbcc208f-6b78-4c4c-88d9-043d963343de\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.619243 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbcc208f-6b78-4c4c-88d9-043d963343de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbcc208f-6b78-4c4c-88d9-043d963343de" (UID: "fbcc208f-6b78-4c4c-88d9-043d963343de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.619800 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbcc208f-6b78-4c4c-88d9-043d963343de-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.623470 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcc208f-6b78-4c4c-88d9-043d963343de-kube-api-access-6nkrm" (OuterVolumeSpecName: "kube-api-access-6nkrm") pod "fbcc208f-6b78-4c4c-88d9-043d963343de" (UID: "fbcc208f-6b78-4c4c-88d9-043d963343de"). InnerVolumeSpecName "kube-api-access-6nkrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.678876 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" event={"ID":"fbcc208f-6b78-4c4c-88d9-043d963343de","Type":"ContainerDied","Data":"3d67cd3344610b83f592680a5d0103e1df0a51b839ab2bfa7b39d9ab0d369373"} Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.678922 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d67cd3344610b83f592680a5d0103e1df0a51b839ab2bfa7b39d9ab0d369373" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.678995 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6416-account-create-update-xvbh7" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.727216 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkrm\" (UniqueName: \"kubernetes.io/projected/fbcc208f-6b78-4c4c-88d9-043d963343de-kube-api-access-6nkrm\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.809444 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f855-account-create-update-2d5wz" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.820756 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.836714 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0197-account-create-update-tlpj9" Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.931327 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncbwm\" (UniqueName: \"kubernetes.io/projected/a7dd324f-84f9-4860-8cd0-c00e9eba5367-kube-api-access-ncbwm\") pod \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.931445 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916872c-8d06-4608-84d8-1159ad3c99eb-operator-scripts\") pod \"c916872c-8d06-4608-84d8-1159ad3c99eb\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.931491 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7dd324f-84f9-4860-8cd0-c00e9eba5367-operator-scripts\") pod \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\" (UID: \"a7dd324f-84f9-4860-8cd0-c00e9eba5367\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.931530 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgvg5\" (UniqueName: \"kubernetes.io/projected/eb399cd7-737a-423f-8a68-71d4a3c4f592-kube-api-access-pgvg5\") pod \"eb399cd7-737a-423f-8a68-71d4a3c4f592\" (UID: \"eb399cd7-737a-423f-8a68-71d4a3c4f592\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.931825 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8rcb\" (UniqueName: \"kubernetes.io/projected/c916872c-8d06-4608-84d8-1159ad3c99eb-kube-api-access-x8rcb\") pod \"c916872c-8d06-4608-84d8-1159ad3c99eb\" (UID: \"c916872c-8d06-4608-84d8-1159ad3c99eb\") " Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.932126 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7dd324f-84f9-4860-8cd0-c00e9eba5367-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7dd324f-84f9-4860-8cd0-c00e9eba5367" (UID: "a7dd324f-84f9-4860-8cd0-c00e9eba5367"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.932131 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c916872c-8d06-4608-84d8-1159ad3c99eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c916872c-8d06-4608-84d8-1159ad3c99eb" (UID: "c916872c-8d06-4608-84d8-1159ad3c99eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.932684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb399cd7-737a-423f-8a68-71d4a3c4f592-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb399cd7-737a-423f-8a68-71d4a3c4f592" (UID: "eb399cd7-737a-423f-8a68-71d4a3c4f592"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.932823 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb399cd7-737a-423f-8a68-71d4a3c4f592-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.932846 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916872c-8d06-4608-84d8-1159ad3c99eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.932858 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7dd324f-84f9-4860-8cd0-c00e9eba5367-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.945337 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c916872c-8d06-4608-84d8-1159ad3c99eb-kube-api-access-x8rcb" (OuterVolumeSpecName: "kube-api-access-x8rcb") pod "c916872c-8d06-4608-84d8-1159ad3c99eb" (UID: "c916872c-8d06-4608-84d8-1159ad3c99eb"). InnerVolumeSpecName "kube-api-access-x8rcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.945484 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb399cd7-737a-423f-8a68-71d4a3c4f592-kube-api-access-pgvg5" (OuterVolumeSpecName: "kube-api-access-pgvg5") pod "eb399cd7-737a-423f-8a68-71d4a3c4f592" (UID: "eb399cd7-737a-423f-8a68-71d4a3c4f592"). InnerVolumeSpecName "kube-api-access-pgvg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:13 crc kubenswrapper[4957]: I0218 14:56:13.945564 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7dd324f-84f9-4860-8cd0-c00e9eba5367-kube-api-access-ncbwm" (OuterVolumeSpecName: "kube-api-access-ncbwm") pod "a7dd324f-84f9-4860-8cd0-c00e9eba5367" (UID: "a7dd324f-84f9-4860-8cd0-c00e9eba5367"). InnerVolumeSpecName "kube-api-access-ncbwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.035594 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8rcb\" (UniqueName: \"kubernetes.io/projected/c916872c-8d06-4608-84d8-1159ad3c99eb-kube-api-access-x8rcb\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.035641 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncbwm\" (UniqueName: \"kubernetes.io/projected/a7dd324f-84f9-4860-8cd0-c00e9eba5367-kube-api-access-ncbwm\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.035654 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgvg5\" (UniqueName: \"kubernetes.io/projected/eb399cd7-737a-423f-8a68-71d4a3c4f592-kube-api-access-pgvg5\") on node \"crc\" DevicePath \"\""
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.690682 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f855-account-create-update-2d5wz"
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.690687 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f855-account-create-update-2d5wz" event={"ID":"a7dd324f-84f9-4860-8cd0-c00e9eba5367","Type":"ContainerDied","Data":"7ba7a6cd1044839e14ecfe6988e9a137dad7352936b51be1ca20774b667a2ecc"}
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.691242 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba7a6cd1044839e14ecfe6988e9a137dad7352936b51be1ca20774b667a2ecc"
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.693175 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0197-account-create-update-tlpj9" event={"ID":"eb399cd7-737a-423f-8a68-71d4a3c4f592","Type":"ContainerDied","Data":"d72975c1c1120766f44cdfe9de026c9cf1e95d8951868ce8c8d00b85f05024e2"}
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.693198 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0197-account-create-update-tlpj9"
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.693206 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d72975c1c1120766f44cdfe9de026c9cf1e95d8951868ce8c8d00b85f05024e2"
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.695930 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sgw96" event={"ID":"c916872c-8d06-4608-84d8-1159ad3c99eb","Type":"ContainerDied","Data":"26e51923ccaaa81a76f919dd0d14c674cc2faa56bda7f49966dba1ea43d004b1"}
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.695982 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e51923ccaaa81a76f919dd0d14c674cc2faa56bda7f49966dba1ea43d004b1"
Feb 18 14:56:14 crc kubenswrapper[4957]: I0218 14:56:14.695988 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sgw96"
Need to start a new one" pod="openstack/nova-cell1-db-create-sgw96" Feb 18 14:56:15 crc kubenswrapper[4957]: I0218 14:56:15.252575 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-655997456c-8vtx7" Feb 18 14:56:15 crc kubenswrapper[4957]: I0218 14:56:15.345398 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6744cfff74-cgs9b"] Feb 18 14:56:15 crc kubenswrapper[4957]: I0218 14:56:15.682894 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" Feb 18 14:56:15 crc kubenswrapper[4957]: I0218 14:56:15.790497 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7d845d58dd-m6z94"] Feb 18 14:56:15 crc kubenswrapper[4957]: I0218 14:56:15.905091 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:56:15 crc kubenswrapper[4957]: I0218 14:56:15.978718 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.004901 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data-custom\") pod \"3e973679-e589-4827-8e24-d2fda83ab2e2\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.005040 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5g4z\" (UniqueName: \"kubernetes.io/projected/3e973679-e589-4827-8e24-d2fda83ab2e2-kube-api-access-p5g4z\") pod \"3e973679-e589-4827-8e24-d2fda83ab2e2\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.005136 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-combined-ca-bundle\") pod \"3e973679-e589-4827-8e24-d2fda83ab2e2\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.005449 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data\") pod \"3e973679-e589-4827-8e24-d2fda83ab2e2\" (UID: \"3e973679-e589-4827-8e24-d2fda83ab2e2\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.028599 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3e973679-e589-4827-8e24-d2fda83ab2e2" (UID: "3e973679-e589-4827-8e24-d2fda83ab2e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.044635 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e973679-e589-4827-8e24-d2fda83ab2e2-kube-api-access-p5g4z" (OuterVolumeSpecName: "kube-api-access-p5g4z") pod "3e973679-e589-4827-8e24-d2fda83ab2e2" (UID: "3e973679-e589-4827-8e24-d2fda83ab2e2"). InnerVolumeSpecName "kube-api-access-p5g4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.068216 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e973679-e589-4827-8e24-d2fda83ab2e2" (UID: "3e973679-e589-4827-8e24-d2fda83ab2e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.109671 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.109705 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5g4z\" (UniqueName: \"kubernetes.io/projected/3e973679-e589-4827-8e24-d2fda83ab2e2-kube-api-access-p5g4z\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.109716 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.112349 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data" (OuterVolumeSpecName: "config-data") pod "3e973679-e589-4827-8e24-d2fda83ab2e2" (UID: "3e973679-e589-4827-8e24-d2fda83ab2e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.213011 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e973679-e589-4827-8e24-d2fda83ab2e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.344231 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.520618 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data-custom\") pod \"37f34265-b814-4eb7-b633-b1516352e951\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.521047 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data\") pod \"37f34265-b814-4eb7-b633-b1516352e951\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.521082 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxszb\" (UniqueName: \"kubernetes.io/projected/37f34265-b814-4eb7-b633-b1516352e951-kube-api-access-rxszb\") pod \"37f34265-b814-4eb7-b633-b1516352e951\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.521250 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-combined-ca-bundle\") pod \"37f34265-b814-4eb7-b633-b1516352e951\" (UID: \"37f34265-b814-4eb7-b633-b1516352e951\") " Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.525147 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-687db6759-27j8z" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.530329 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f34265-b814-4eb7-b633-b1516352e951-kube-api-access-rxszb" (OuterVolumeSpecName: "kube-api-access-rxszb") pod "37f34265-b814-4eb7-b633-b1516352e951" (UID: "37f34265-b814-4eb7-b633-b1516352e951"). InnerVolumeSpecName "kube-api-access-rxszb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.531588 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37f34265-b814-4eb7-b633-b1516352e951" (UID: "37f34265-b814-4eb7-b633-b1516352e951"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.580203 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37f34265-b814-4eb7-b633-b1516352e951" (UID: "37f34265-b814-4eb7-b633-b1516352e951"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.593615 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-65df87cd54-bjsc4"] Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.595870 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-65df87cd54-bjsc4" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" containerID="cri-o://b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" gracePeriod=60 Feb 18 14:56:16 crc kubenswrapper[4957]: E0218 14:56:16.602880 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:16 crc kubenswrapper[4957]: E0218 14:56:16.606859 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:16 crc kubenswrapper[4957]: E0218 14:56:16.613913 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:16 crc kubenswrapper[4957]: E0218 14:56:16.613998 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-65df87cd54-bjsc4" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.624719 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.624757 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.624766 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxszb\" (UniqueName: \"kubernetes.io/projected/37f34265-b814-4eb7-b633-b1516352e951-kube-api-access-rxszb\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.648194 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data" (OuterVolumeSpecName: "config-data") pod "37f34265-b814-4eb7-b633-b1516352e951" (UID: "37f34265-b814-4eb7-b633-b1516352e951"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.718277 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6744cfff74-cgs9b" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.718268 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6744cfff74-cgs9b" event={"ID":"3e973679-e589-4827-8e24-d2fda83ab2e2","Type":"ContainerDied","Data":"1e9f058a04df53ed2f4240290871638c0bf5315bd1029aeba10c3c32111cf339"} Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.718459 4957 scope.go:117] "RemoveContainer" containerID="0b02eb364ae87efff724f29f33cdfbdaf7a7bbc39dd2df08ac9687f3d5fd9944" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.720174 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" event={"ID":"37f34265-b814-4eb7-b633-b1516352e951","Type":"ContainerDied","Data":"31b44221f96767c7889188a7cc88911f1d912482af56c514514cb9162b2a03b5"} Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.720255 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d845d58dd-m6z94" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.726761 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f34265-b814-4eb7-b633-b1516352e951-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.752252 4957 scope.go:117] "RemoveContainer" containerID="a63d2bf777122f3fbb509f7787ba6007bbb503ee64f6e8458ab8b0b855be90e6" Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.769313 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6744cfff74-cgs9b"] Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.783119 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6744cfff74-cgs9b"] Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.793406 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7d845d58dd-m6z94"] Feb 18 14:56:16 crc kubenswrapper[4957]: I0218 14:56:16.805394 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7d845d58dd-m6z94"] Feb 18 14:56:17 crc kubenswrapper[4957]: I0218 14:56:17.741057 4957 generic.go:334] "Generic (PLEG): container finished" podID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerID="83613776e4fdcc13175a733b8e96ed4451c30ba7131123ddeb3588ef6b3677ba" exitCode=0 Feb 18 14:56:17 crc kubenswrapper[4957]: I0218 14:56:17.741135 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerDied","Data":"83613776e4fdcc13175a733b8e96ed4451c30ba7131123ddeb3588ef6b3677ba"} Feb 18 14:56:17 crc kubenswrapper[4957]: I0218 14:56:17.973551 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.061469 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-combined-ca-bundle\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.062072 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-log-httpd\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.062124 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-scripts\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.062165 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-run-httpd\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.062933 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-sg-core-conf-yaml\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.062979 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxjl6\" (UniqueName: \"kubernetes.io/projected/32518ba7-d47c-461e-a3e3-f13cca6bfd40-kube-api-access-mxjl6\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.063036 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-config-data\") pod \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\" (UID: \"32518ba7-d47c-461e-a3e3-f13cca6bfd40\") " Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.063273 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.064020 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.064064 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.085239 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-scripts" (OuterVolumeSpecName: "scripts") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.101273 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32518ba7-d47c-461e-a3e3-f13cca6bfd40-kube-api-access-mxjl6" (OuterVolumeSpecName: "kube-api-access-mxjl6") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). InnerVolumeSpecName "kube-api-access-mxjl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.166060 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxjl6\" (UniqueName: \"kubernetes.io/projected/32518ba7-d47c-461e-a3e3-f13cca6bfd40-kube-api-access-mxjl6\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.166092 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.166101 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32518ba7-d47c-461e-a3e3-f13cca6bfd40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.171567 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.207589 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.232527 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f34265-b814-4eb7-b633-b1516352e951" path="/var/lib/kubelet/pods/37f34265-b814-4eb7-b633-b1516352e951/volumes" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.233081 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" path="/var/lib/kubelet/pods/3e973679-e589-4827-8e24-d2fda83ab2e2/volumes" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.268750 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.268868 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.302515 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-config-data" (OuterVolumeSpecName: "config-data") pod "32518ba7-d47c-461e-a3e3-f13cca6bfd40" (UID: "32518ba7-d47c-461e-a3e3-f13cca6bfd40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.373251 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32518ba7-d47c-461e-a3e3-f13cca6bfd40-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.556724 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b2srx"] Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557145 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-notification-agent" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557163 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-notification-agent" Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557182 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-central-agent" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557189 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-central-agent" Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557200 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" containerName="heat-api" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557206 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" containerName="heat-api" Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557217 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="sg-core" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557223 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="sg-core" 
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557236 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb47212d-7882-43e9-bea7-a114f2e4f629" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557242 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb47212d-7882-43e9-bea7-a114f2e4f629" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557254 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557259 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557267 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc208f-6b78-4c4c-88d9-043d963343de" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557272 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc208f-6b78-4c4c-88d9-043d963343de" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557281 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7dd324f-84f9-4860-8cd0-c00e9eba5367" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557288 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7dd324f-84f9-4860-8cd0-c00e9eba5367" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557296 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557303 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557320 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f34265-b814-4eb7-b633-b1516352e951" containerName="heat-cfnapi"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557326 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f34265-b814-4eb7-b633-b1516352e951" containerName="heat-cfnapi"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557336 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099706d3-04cd-4729-b03d-774bc14ae8b5" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557341 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="099706d3-04cd-4729-b03d-774bc14ae8b5" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557353 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f34265-b814-4eb7-b633-b1516352e951" containerName="heat-cfnapi"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557359 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f34265-b814-4eb7-b633-b1516352e951" containerName="heat-cfnapi"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557370 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="proxy-httpd"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557376 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="proxy-httpd"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557388 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb399cd7-737a-423f-8a68-71d4a3c4f592" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557394 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb399cd7-737a-423f-8a68-71d4a3c4f592" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerName="init"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557417 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerName="init"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557442 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerName="dnsmasq-dns"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557448 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerName="dnsmasq-dns"
Feb 18 14:56:18 crc kubenswrapper[4957]: E0218 14:56:18.557460 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c916872c-8d06-4608-84d8-1159ad3c99eb" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557466 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c916872c-8d06-4608-84d8-1159ad3c99eb" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557715 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="099706d3-04cd-4729-b03d-774bc14ae8b5" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557731 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-central-agent"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557740 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcc208f-6b78-4c4c-88d9-043d963343de" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557750 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7dd324f-84f9-4860-8cd0-c00e9eba5367" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557760 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c916872c-8d06-4608-84d8-1159ad3c99eb" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557770 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb399cd7-737a-423f-8a68-71d4a3c4f592" containerName="mariadb-account-create-update"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557780 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f34265-b814-4eb7-b633-b1516352e951" containerName="heat-cfnapi"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557791 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ba3d00-f51e-4168-809e-ee46dad21b45" containerName="dnsmasq-dns"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557801 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="sg-core"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557809 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557816 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f682c53-b8cc-42e4-a3b3-3bccfdf7c908" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557827 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e973679-e589-4827-8e24-d2fda83ab2e2" containerName="heat-api"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557844 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="ceilometer-notification-agent"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557852 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" containerName="proxy-httpd"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.557858 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb47212d-7882-43e9-bea7-a114f2e4f629" containerName="mariadb-database-create"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.558610 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.563822 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k9jxh"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.564006 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.564197 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.568558 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b2srx"]
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.685858 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-config-data\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.685926 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.685959 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7qz\" (UniqueName: \"kubernetes.io/projected/cb7ac820-e63c-4996-af6b-b0f45530ef91-kube-api-access-7h7qz\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.686139 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.757667 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32518ba7-d47c-461e-a3e3-f13cca6bfd40","Type":"ContainerDied","Data":"b4c923cd8483abdf715f733440a105cd02f27f1615ec6426656423f10ac20452"}
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.757734 4957 scope.go:117] "RemoveContainer" containerID="13619cc6549493bb8b29544bcfb1f3bfd178bf312059f6cb10aaf479918f2488"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.757734 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.780963 4957 scope.go:117] "RemoveContainer" containerID="8cef25036ba9f6eeba7b7d0c60e51e5b021cad30ea9b8df843af6d851b6c405d"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.790310 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-config-data\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.790640 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.790775 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h7qz\" (UniqueName: \"kubernetes.io/projected/cb7ac820-e63c-4996-af6b-b0f45530ef91-kube-api-access-7h7qz\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.791023 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.801569 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-config-data\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.802016 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.802312 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx"
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.809621 4957 scope.go:117] "RemoveContainer" containerID="16de19ebf6ef47001ba7f6991eaa123b1157f7154ee4d8d30dc469d077de5e29" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.809759 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.824476 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.833039 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h7qz\" (UniqueName: \"kubernetes.io/projected/cb7ac820-e63c-4996-af6b-b0f45530ef91-kube-api-access-7h7qz\") pod \"nova-cell0-conductor-db-sync-b2srx\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " pod="openstack/nova-cell0-conductor-db-sync-b2srx" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.845074 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.845811 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f34265-b814-4eb7-b633-b1516352e951" containerName="heat-cfnapi" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.861542 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.861657 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.864739 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.865105 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.865442 4957 scope.go:117] "RemoveContainer" containerID="83613776e4fdcc13175a733b8e96ed4451c30ba7131123ddeb3588ef6b3677ba" Feb 18 14:56:18 crc kubenswrapper[4957]: I0218 14:56:18.913540 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b2srx" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.001797 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8lwq\" (UniqueName: \"kubernetes.io/projected/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-kube-api-access-w8lwq\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.001874 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-config-data\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.002088 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-run-httpd\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.002164 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.002203 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.002233 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-log-httpd\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.002701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-scripts\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105169 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-scripts\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8lwq\" (UniqueName: \"kubernetes.io/projected/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-kube-api-access-w8lwq\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105304 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-config-data\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105341 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-run-httpd\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105361 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105382 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.105402 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-log-httpd\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.106001 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-log-httpd\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.106266 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-run-httpd\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.118859 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-scripts\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.119586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.119616 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.119882 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-config-data\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.130451 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8lwq\" (UniqueName: \"kubernetes.io/projected/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-kube-api-access-w8lwq\") pod \"ceilometer-0\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.270031 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:56:19 crc kubenswrapper[4957]: I0218 14:56:19.802615 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b2srx"] Feb 18 14:56:19 crc kubenswrapper[4957]: W0218 14:56:19.833023 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7ac820_e63c_4996_af6b_b0f45530ef91.slice/crio-b73ef87f6a68f2e2e6fff250e12ba020ec3073802cd58043951fd610bbb7b66e WatchSource:0}: Error finding container b73ef87f6a68f2e2e6fff250e12ba020ec3073802cd58043951fd610bbb7b66e: Status 404 returned error can't find the container with id b73ef87f6a68f2e2e6fff250e12ba020ec3073802cd58043951fd610bbb7b66e Feb 18 14:56:20 crc kubenswrapper[4957]: E0218 14:56:20.116646 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:20 crc kubenswrapper[4957]: E0218 14:56:20.118784 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:20 crc kubenswrapper[4957]: E0218 14:56:20.121934 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:20 crc kubenswrapper[4957]: E0218 14:56:20.121991 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-65df87cd54-bjsc4" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" Feb 18 14:56:20 crc kubenswrapper[4957]: I0218 14:56:20.134631 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:20 crc kubenswrapper[4957]: I0218 14:56:20.239071 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32518ba7-d47c-461e-a3e3-f13cca6bfd40" path="/var/lib/kubelet/pods/32518ba7-d47c-461e-a3e3-f13cca6bfd40/volumes" Feb 18 14:56:20 crc kubenswrapper[4957]: I0218 14:56:20.829086 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerStarted","Data":"dec1a1ae044cc2ed101d83df0cbf0fae263040f34af917301ae43a43dc181a2b"} Feb 18 14:56:20 crc kubenswrapper[4957]: I0218 14:56:20.838015 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b2srx" event={"ID":"cb7ac820-e63c-4996-af6b-b0f45530ef91","Type":"ContainerStarted","Data":"b73ef87f6a68f2e2e6fff250e12ba020ec3073802cd58043951fd610bbb7b66e"} Feb 18 14:56:21 crc kubenswrapper[4957]: I0218 14:56:21.277532 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:56:21 crc kubenswrapper[4957]: I0218 14:56:21.856279 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerStarted","Data":"04132032e953fe9a4c03bc31595b5685b8e97224273fe90b506838a70c8ee0f9"} Feb 18 14:56:22 crc kubenswrapper[4957]: I0218 14:56:22.873228 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerStarted","Data":"8c1b890d5fd9505a656719032abf5bf5576c11c1ec0659700a912326efddd8f9"} Feb 18 14:56:22 crc kubenswrapper[4957]: I0218 14:56:22.873774 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerStarted","Data":"2e317e757815ed454960cb98f2b5526451acf81732ac32f2002270cd04314718"} Feb 18 14:56:25 crc kubenswrapper[4957]: I0218 14:56:25.932720 4957 generic.go:334] "Generic (PLEG): container finished" podID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" exitCode=0 Feb 18 14:56:25 crc kubenswrapper[4957]: I0218 14:56:25.933289 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65df87cd54-bjsc4" event={"ID":"bde3dea8-c483-423d-8f4a-74575433fd2f","Type":"ContainerDied","Data":"b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45"} Feb 18 14:56:30 crc kubenswrapper[4957]: E0218 14:56:30.114856 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45 is running failed: container process not found" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:30 crc kubenswrapper[4957]: E0218 14:56:30.115653 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45 is running failed: container process not found" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:30 crc kubenswrapper[4957]: E0218 14:56:30.115972 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45 is running failed: container process not found" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:56:30 crc kubenswrapper[4957]: E0218 14:56:30.116038 4957 prober.go:104] "Probe 
errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-65df87cd54-bjsc4" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.542823 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.655951 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data\") pod \"bde3dea8-c483-423d-8f4a-74575433fd2f\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.656090 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data-custom\") pod \"bde3dea8-c483-423d-8f4a-74575433fd2f\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.656170 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-combined-ca-bundle\") pod \"bde3dea8-c483-423d-8f4a-74575433fd2f\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.656226 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9gdc\" (UniqueName: \"kubernetes.io/projected/bde3dea8-c483-423d-8f4a-74575433fd2f-kube-api-access-k9gdc\") pod \"bde3dea8-c483-423d-8f4a-74575433fd2f\" (UID: \"bde3dea8-c483-423d-8f4a-74575433fd2f\") " Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.663893 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bde3dea8-c483-423d-8f4a-74575433fd2f" (UID: "bde3dea8-c483-423d-8f4a-74575433fd2f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.664199 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde3dea8-c483-423d-8f4a-74575433fd2f-kube-api-access-k9gdc" (OuterVolumeSpecName: "kube-api-access-k9gdc") pod "bde3dea8-c483-423d-8f4a-74575433fd2f" (UID: "bde3dea8-c483-423d-8f4a-74575433fd2f"). InnerVolumeSpecName "kube-api-access-k9gdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.760346 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.760404 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9gdc\" (UniqueName: \"kubernetes.io/projected/bde3dea8-c483-423d-8f4a-74575433fd2f-kube-api-access-k9gdc\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.768508 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bde3dea8-c483-423d-8f4a-74575433fd2f" (UID: "bde3dea8-c483-423d-8f4a-74575433fd2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.792996 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data" (OuterVolumeSpecName: "config-data") pod "bde3dea8-c483-423d-8f4a-74575433fd2f" (UID: "bde3dea8-c483-423d-8f4a-74575433fd2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.863361 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:31 crc kubenswrapper[4957]: I0218 14:56:31.863407 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3dea8-c483-423d-8f4a-74575433fd2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.023941 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65df87cd54-bjsc4" event={"ID":"bde3dea8-c483-423d-8f4a-74575433fd2f","Type":"ContainerDied","Data":"3115fb07fc1cc6939894ecf250674667b35f5368d315b100d5f6be1dcbdd4f8d"} Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.024325 4957 scope.go:117] "RemoveContainer" containerID="b6ed8a5c9d2f5879a3cf52c2eea94e0aec596e98c2e9ed9df790ef55c4e92c45" Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.024011 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-65df87cd54-bjsc4" Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.026543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b2srx" event={"ID":"cb7ac820-e63c-4996-af6b-b0f45530ef91","Type":"ContainerStarted","Data":"d034e5e901bf87db3b68accb8d7445274e8e34fb1ea4765dd77aa842b1b74514"} Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.037655 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerStarted","Data":"ab6a389b2ea669acd56d71cc3369d20c5dae8d00ae5a31d93c78c90f4d45af25"} Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.037830 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-central-agent" containerID="cri-o://04132032e953fe9a4c03bc31595b5685b8e97224273fe90b506838a70c8ee0f9" gracePeriod=30 Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.038070 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.038129 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="proxy-httpd" containerID="cri-o://ab6a389b2ea669acd56d71cc3369d20c5dae8d00ae5a31d93c78c90f4d45af25" gracePeriod=30 Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.047694 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="sg-core" containerID="cri-o://8c1b890d5fd9505a656719032abf5bf5576c11c1ec0659700a912326efddd8f9" gracePeriod=30 Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.048132 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-notification-agent" containerID="cri-o://2e317e757815ed454960cb98f2b5526451acf81732ac32f2002270cd04314718" gracePeriod=30 Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.073362 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-b2srx" podStartSLOduration=2.633843475 podStartE2EDuration="14.07334328s" podCreationTimestamp="2026-02-18 14:56:18 +0000 UTC" firstStartedPulling="2026-02-18 14:56:19.844641541 +0000 UTC m=+1486.365506285" lastFinishedPulling="2026-02-18 14:56:31.284141346 +0000 UTC m=+1497.805006090" observedRunningTime="2026-02-18 14:56:32.05226545 +0000 UTC m=+1498.573130194" watchObservedRunningTime="2026-02-18 14:56:32.07334328 +0000 UTC m=+1498.594208024" Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.082351 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.927191943 podStartE2EDuration="14.08233553s" podCreationTimestamp="2026-02-18 14:56:18 +0000 UTC" firstStartedPulling="2026-02-18 14:56:20.123718386 +0000 UTC m=+1486.644583130" lastFinishedPulling="2026-02-18 14:56:31.278861973 +0000 UTC m=+1497.799726717" observedRunningTime="2026-02-18 14:56:32.081223158 +0000 UTC m=+1498.602087902" watchObservedRunningTime="2026-02-18 14:56:32.08233553 +0000 UTC m=+1498.603200274" Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.120994 
Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.120994 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-65df87cd54-bjsc4"]
Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.133806 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-65df87cd54-bjsc4"]
Feb 18 14:56:32 crc kubenswrapper[4957]: I0218 14:56:32.230564 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" path="/var/lib/kubelet/pods/bde3dea8-c483-423d-8f4a-74575433fd2f/volumes"
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.065246 4957 generic.go:334] "Generic (PLEG): container finished" podID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerID="8c1b890d5fd9505a656719032abf5bf5576c11c1ec0659700a912326efddd8f9" exitCode=2
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.065545 4957 generic.go:334] "Generic (PLEG): container finished" podID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerID="2e317e757815ed454960cb98f2b5526451acf81732ac32f2002270cd04314718" exitCode=0
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.065553 4957 generic.go:334] "Generic (PLEG): container finished" podID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerID="04132032e953fe9a4c03bc31595b5685b8e97224273fe90b506838a70c8ee0f9" exitCode=0
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.065360 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerDied","Data":"8c1b890d5fd9505a656719032abf5bf5576c11c1ec0659700a912326efddd8f9"}
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.066623 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerDied","Data":"2e317e757815ed454960cb98f2b5526451acf81732ac32f2002270cd04314718"}
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.066638 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerDied","Data":"04132032e953fe9a4c03bc31595b5685b8e97224273fe90b506838a70c8ee0f9"}
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.202368 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.203035 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-log" containerID="cri-o://a23e1035e2130f7ffc0042e2de1aeb8444de6b117f9304a7d6f61a57faddbd56" gracePeriod=30
Feb 18 14:56:33 crc kubenswrapper[4957]: I0218 14:56:33.203122 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-httpd" containerID="cri-o://1e2f8288271c4a9f01a34ca593d2f6701401f7c597ca71e28d78a04c62be5f52" gracePeriod=30
Feb 18 14:56:34 crc kubenswrapper[4957]: I0218 14:56:34.079193 4957 generic.go:334] "Generic (PLEG): container finished" podID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerID="a23e1035e2130f7ffc0042e2de1aeb8444de6b117f9304a7d6f61a57faddbd56" exitCode=143
Feb 18 14:56:34 crc kubenswrapper[4957]: I0218 14:56:34.079271 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb55dbb2-b556-42c0-af4c-e94f389576c3","Type":"ContainerDied","Data":"a23e1035e2130f7ffc0042e2de1aeb8444de6b117f9304a7d6f61a57faddbd56"}
Feb 18 14:56:34 crc kubenswrapper[4957]: I0218 14:56:34.415233 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 14:56:34 crc kubenswrapper[4957]: I0218 14:56:34.415742 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-httpd" containerID="cri-o://77670be4dca07a5341a66ef224ba145df08a99a0a4195e0749d8b4fc0773cc5d" gracePeriod=30
Feb 18 14:56:34 crc kubenswrapper[4957]: I0218 14:56:34.415633 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-log" containerID="cri-o://7af7e42045fe579837329e1cb64f1e6505f8fdfdeeb82dc284b3e513d17e3132" gracePeriod=30
Feb 18 14:56:35 crc kubenswrapper[4957]: I0218 14:56:35.092140 4957 generic.go:334] "Generic (PLEG): container finished" podID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerID="7af7e42045fe579837329e1cb64f1e6505f8fdfdeeb82dc284b3e513d17e3132" exitCode=143
Feb 18 14:56:35 crc kubenswrapper[4957]: I0218 14:56:35.092243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9","Type":"ContainerDied","Data":"7af7e42045fe579837329e1cb64f1e6505f8fdfdeeb82dc284b3e513d17e3132"}
Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.116576 4957 generic.go:334] "Generic (PLEG): container finished" podID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerID="1e2f8288271c4a9f01a34ca593d2f6701401f7c597ca71e28d78a04c62be5f52" exitCode=0
Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.116667 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb55dbb2-b556-42c0-af4c-e94f389576c3","Type":"ContainerDied","Data":"1e2f8288271c4a9f01a34ca593d2f6701401f7c597ca71e28d78a04c62be5f52"}
Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.118613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb55dbb2-b556-42c0-af4c-e94f389576c3","Type":"ContainerDied","Data":"545dc7fad2b0e39e450bfe027538249e70b9a92ad218511a077714df1c2410a4"}
Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.118735 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="545dc7fad2b0e39e450bfe027538249e70b9a92ad218511a077714df1c2410a4"
Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.210890 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.279365 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.279432 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.413095 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-combined-ca-bundle\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.413171 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pttkc\" (UniqueName: \"kubernetes.io/projected/eb55dbb2-b556-42c0-af4c-e94f389576c3-kube-api-access-pttkc\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.413231 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-httpd-run\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.413705 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.413938 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.414098 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-scripts\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.414150 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-config-data\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.414186 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-public-tls-certs\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.414278 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-logs\") pod \"eb55dbb2-b556-42c0-af4c-e94f389576c3\" (UID: \"eb55dbb2-b556-42c0-af4c-e94f389576c3\") " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.414840 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-logs" (OuterVolumeSpecName: "logs") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.415060 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.415080 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb55dbb2-b556-42c0-af4c-e94f389576c3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.418945 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb55dbb2-b556-42c0-af4c-e94f389576c3-kube-api-access-pttkc" (OuterVolumeSpecName: "kube-api-access-pttkc") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "kube-api-access-pttkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.422038 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-scripts" (OuterVolumeSpecName: "scripts") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.453600 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.459940 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d" (OuterVolumeSpecName: "glance") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "pvc-e7977071-236d-4f61-8ab3-a6195398953d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.494922 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.494948 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-config-data" (OuterVolumeSpecName: "config-data") pod "eb55dbb2-b556-42c0-af4c-e94f389576c3" (UID: "eb55dbb2-b556-42c0-af4c-e94f389576c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.517089 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.517300 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.517374 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pttkc\" (UniqueName: \"kubernetes.io/projected/eb55dbb2-b556-42c0-af4c-e94f389576c3-kube-api-access-pttkc\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.517575 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") on node \"crc\" " Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.517668 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.517737 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55dbb2-b556-42c0-af4c-e94f389576c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.553209 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.553613 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e7977071-236d-4f61-8ab3-a6195398953d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d") on node "crc" Feb 18 14:56:37 crc kubenswrapper[4957]: I0218 14:56:37.619700 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.131185 4957 generic.go:334] "Generic (PLEG): container finished" podID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerID="77670be4dca07a5341a66ef224ba145df08a99a0a4195e0749d8b4fc0773cc5d" exitCode=0 Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.131306 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9","Type":"ContainerDied","Data":"77670be4dca07a5341a66ef224ba145df08a99a0a4195e0749d8b4fc0773cc5d"} Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.131551 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.131551 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9","Type":"ContainerDied","Data":"9855b749d5fcd7f19ac26072a6f499ffdbfd396e69dc950cf8f4dbf737ec6587"} Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.131570 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9855b749d5fcd7f19ac26072a6f499ffdbfd396e69dc950cf8f4dbf737ec6587" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.220203 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.243857 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.255548 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.323643 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:56:38 crc kubenswrapper[4957]: E0218 14:56:38.325350 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.326563 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" Feb 18 14:56:38 crc kubenswrapper[4957]: E0218 14:56:38.326894 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-log" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.326972 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-log" Feb 18 14:56:38 crc kubenswrapper[4957]: E0218 14:56:38.327061 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-httpd" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.327237 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-httpd" Feb 18 14:56:38 crc kubenswrapper[4957]: E0218 14:56:38.327329 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-httpd" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.327386 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-httpd" Feb 18 14:56:38 crc kubenswrapper[4957]: E0218 14:56:38.327480 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-log" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.327554 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-log" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.328295 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-httpd" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.328403 4957 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-log" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.328494 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde3dea8-c483-423d-8f4a-74575433fd2f" containerName="heat-engine" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.328565 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" containerName="glance-httpd" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.328638 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" containerName="glance-log" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.331503 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.336502 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.336816 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.338483 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-httpd-run\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.338652 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-scripts\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.338723 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-internal-tls-certs\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.339207 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h9fj\" (UniqueName: \"kubernetes.io/projected/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-kube-api-access-7h9fj\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.339297 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-combined-ca-bundle\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.339349 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-logs\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.339389 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-config-data\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.340748 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\" (UID: \"eef0cbff-2dd2-4fee-86cc-04d3bd3032f9\") " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.344051 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.344613 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-logs" (OuterVolumeSpecName: "logs") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.358102 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.358364 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.358098 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-scripts" (OuterVolumeSpecName: "scripts") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.365861 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.380406 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-kube-api-access-7h9fj" (OuterVolumeSpecName: "kube-api-access-7h9fj") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "kube-api-access-7h9fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.392032 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982" (OuterVolumeSpecName: "glance") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.406488 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.436588 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-config-data" (OuterVolumeSpecName: "config-data") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.460851 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.460902 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-scripts\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.460972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1308e026-1cb5-4c09-8623-beac2d513406-logs\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461014 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-config-data\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461089 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461181 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1308e026-1cb5-4c09-8623-beac2d513406-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461265 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kptl8\" (UniqueName: \"kubernetes.io/projected/1308e026-1cb5-4c09-8623-beac2d513406-kube-api-access-kptl8\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461353 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h9fj\" (UniqueName: \"kubernetes.io/projected/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-kube-api-access-7h9fj\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461368 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461380 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461406 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") on node \"crc\" " Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.461434 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.493564 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" (UID: "eef0cbff-2dd2-4fee-86cc-04d3bd3032f9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.511500 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.511675 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982") on node "crc" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.562775 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.562847 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1308e026-1cb5-4c09-8623-beac2d513406-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.562891 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kptl8\" (UniqueName: \"kubernetes.io/projected/1308e026-1cb5-4c09-8623-beac2d513406-kube-api-access-kptl8\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.562967 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.562989 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-scripts\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.563054 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1308e026-1cb5-4c09-8623-beac2d513406-logs\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.563095 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-config-data\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.563183 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.563283 4957 reconciler_common.go:293] "Volume detached for volume 
\"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.563295 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.564118 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1308e026-1cb5-4c09-8623-beac2d513406-logs\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.565506 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1308e026-1cb5-4c09-8623-beac2d513406-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.566985 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.567023 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/591545b4df89bef99058b4a9f50b40c040b4db545d218b7f1013700bffeeb8a2/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.568178 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.570056 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-config-data\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.572362 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-scripts\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.572653 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1308e026-1cb5-4c09-8623-beac2d513406-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 
14:56:38.587471 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kptl8\" (UniqueName: \"kubernetes.io/projected/1308e026-1cb5-4c09-8623-beac2d513406-kube-api-access-kptl8\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.644565 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e7977071-236d-4f61-8ab3-a6195398953d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e7977071-236d-4f61-8ab3-a6195398953d\") pod \"glance-default-external-api-0\" (UID: \"1308e026-1cb5-4c09-8623-beac2d513406\") " pod="openstack/glance-default-external-api-0" Feb 18 14:56:38 crc kubenswrapper[4957]: I0218 14:56:38.734890 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.143673 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.190179 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.221143 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.236369 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.238457 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.247947 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.248202 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.251135 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.390899 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.390996 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bv5\" (UniqueName: \"kubernetes.io/projected/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-kube-api-access-q7bv5\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.391021 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.391063 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.391148 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.391186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.391271 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.391341 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.408605 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.493719 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494127 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bv5\" (UniqueName: \"kubernetes.io/projected/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-kube-api-access-q7bv5\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494224 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494281 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494448 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.494558 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.497975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.503731 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.503861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.508196 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.508241 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad2bd3876c77f68ca56101056f3774a80bb42ebacbba043834367849c0bb95ba/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.508378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.519459 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.527262 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bv5\" (UniqueName: \"kubernetes.io/projected/bb989a4d-f9cc-4305-a0e6-8f162b666e9d-kube-api-access-q7bv5\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.602882 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d87d3ce-8343-4706-ae50-f2bcfa8ba982\") pod \"glance-default-internal-api-0\" (UID: \"bb989a4d-f9cc-4305-a0e6-8f162b666e9d\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:56:39 crc kubenswrapper[4957]: I0218 14:56:39.888933 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:40 crc kubenswrapper[4957]: I0218 14:56:40.242166 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb55dbb2-b556-42c0-af4c-e94f389576c3" path="/var/lib/kubelet/pods/eb55dbb2-b556-42c0-af4c-e94f389576c3/volumes" Feb 18 14:56:40 crc kubenswrapper[4957]: I0218 14:56:40.243356 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef0cbff-2dd2-4fee-86cc-04d3bd3032f9" path="/var/lib/kubelet/pods/eef0cbff-2dd2-4fee-86cc-04d3bd3032f9/volumes" Feb 18 14:56:40 crc kubenswrapper[4957]: I0218 14:56:40.245120 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1308e026-1cb5-4c09-8623-beac2d513406","Type":"ContainerStarted","Data":"69b285088c247ad71a1fc4d66084864c11bbe76307e358f110e5e822dbeb8b2d"} Feb 18 14:56:40 crc kubenswrapper[4957]: I0218 14:56:40.532756 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:56:40 crc kubenswrapper[4957]: W0218 14:56:40.535134 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb989a4d_f9cc_4305_a0e6_8f162b666e9d.slice/crio-2bee8114c7fbbee237463c6d4d6e5b55a62f1310b67858061d3bc8504d96d9f7 WatchSource:0}: Error finding container 2bee8114c7fbbee237463c6d4d6e5b55a62f1310b67858061d3bc8504d96d9f7: Status 404 returned error can't find the container with id 2bee8114c7fbbee237463c6d4d6e5b55a62f1310b67858061d3bc8504d96d9f7 Feb 18 14:56:41 crc kubenswrapper[4957]: I0218 14:56:41.250467 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb989a4d-f9cc-4305-a0e6-8f162b666e9d","Type":"ContainerStarted","Data":"d388e0d6f3d75a1875a741ea539865431c540e6b5455b2cc9692c74e780466aa"} Feb 18 14:56:41 crc kubenswrapper[4957]: I0218 14:56:41.251081 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb989a4d-f9cc-4305-a0e6-8f162b666e9d","Type":"ContainerStarted","Data":"2bee8114c7fbbee237463c6d4d6e5b55a62f1310b67858061d3bc8504d96d9f7"} Feb 18 14:56:41 crc kubenswrapper[4957]: I0218 14:56:41.283343 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1308e026-1cb5-4c09-8623-beac2d513406","Type":"ContainerStarted","Data":"5d871042e5115087a854901eee644f7a4318edf130aa3446c4f36bddf7052d66"} Feb 18 14:56:41 crc kubenswrapper[4957]: I0218 14:56:41.283464 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1308e026-1cb5-4c09-8623-beac2d513406","Type":"ContainerStarted","Data":"3e4fe25d36bfb03ab67a1511d0abe54f04aa14a5e667dee10c59947f31cb13ea"} Feb 18 14:56:41 crc kubenswrapper[4957]: I0218 14:56:41.335062 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.335041285 podStartE2EDuration="3.335041285s" podCreationTimestamp="2026-02-18 14:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:41.312955086 +0000 UTC m=+1507.833819850" watchObservedRunningTime="2026-02-18 14:56:41.335041285 +0000 UTC m=+1507.855906029" Feb 18 14:56:42 crc kubenswrapper[4957]: I0218 14:56:42.303360 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"bb989a4d-f9cc-4305-a0e6-8f162b666e9d","Type":"ContainerStarted","Data":"d4ca451a4df85ce1f6cfbcb8c1c95875540b77f5bafedf1a8811d218a3745840"} Feb 18 14:56:42 crc kubenswrapper[4957]: I0218 14:56:42.325733 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.325710707 podStartE2EDuration="3.325710707s" podCreationTimestamp="2026-02-18 14:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:42.319649542 +0000 UTC m=+1508.840514296" watchObservedRunningTime="2026-02-18 14:56:42.325710707 +0000 UTC m=+1508.846575451" Feb 18 14:56:45 crc kubenswrapper[4957]: I0218 14:56:45.347834 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb7ac820-e63c-4996-af6b-b0f45530ef91" containerID="d034e5e901bf87db3b68accb8d7445274e8e34fb1ea4765dd77aa842b1b74514" exitCode=0 Feb 18 14:56:45 crc kubenswrapper[4957]: I0218 14:56:45.347927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b2srx" event={"ID":"cb7ac820-e63c-4996-af6b-b0f45530ef91","Type":"ContainerDied","Data":"d034e5e901bf87db3b68accb8d7445274e8e34fb1ea4765dd77aa842b1b74514"} Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.822536 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b2srx" Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.906222 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-combined-ca-bundle\") pod \"cb7ac820-e63c-4996-af6b-b0f45530ef91\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.906335 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h7qz\" (UniqueName: \"kubernetes.io/projected/cb7ac820-e63c-4996-af6b-b0f45530ef91-kube-api-access-7h7qz\") pod \"cb7ac820-e63c-4996-af6b-b0f45530ef91\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.906364 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-config-data\") pod \"cb7ac820-e63c-4996-af6b-b0f45530ef91\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.906379 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts\") pod \"cb7ac820-e63c-4996-af6b-b0f45530ef91\" (UID: \"cb7ac820-e63c-4996-af6b-b0f45530ef91\") " Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.913856 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7ac820-e63c-4996-af6b-b0f45530ef91-kube-api-access-7h7qz" (OuterVolumeSpecName: "kube-api-access-7h7qz") pod "cb7ac820-e63c-4996-af6b-b0f45530ef91" (UID: "cb7ac820-e63c-4996-af6b-b0f45530ef91"). InnerVolumeSpecName "kube-api-access-7h7qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.914465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts" (OuterVolumeSpecName: "scripts") pod "cb7ac820-e63c-4996-af6b-b0f45530ef91" (UID: "cb7ac820-e63c-4996-af6b-b0f45530ef91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.946281 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb7ac820-e63c-4996-af6b-b0f45530ef91" (UID: "cb7ac820-e63c-4996-af6b-b0f45530ef91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:46 crc kubenswrapper[4957]: I0218 14:56:46.953903 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-config-data" (OuterVolumeSpecName: "config-data") pod "cb7ac820-e63c-4996-af6b-b0f45530ef91" (UID: "cb7ac820-e63c-4996-af6b-b0f45530ef91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.009576 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h7qz\" (UniqueName: \"kubernetes.io/projected/cb7ac820-e63c-4996-af6b-b0f45530ef91-kube-api-access-7h7qz\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.009623 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.009636 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.009648 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7ac820-e63c-4996-af6b-b0f45530ef91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.377841 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b2srx" event={"ID":"cb7ac820-e63c-4996-af6b-b0f45530ef91","Type":"ContainerDied","Data":"b73ef87f6a68f2e2e6fff250e12ba020ec3073802cd58043951fd610bbb7b66e"} Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.378233 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73ef87f6a68f2e2e6fff250e12ba020ec3073802cd58043951fd610bbb7b66e" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.377922 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b2srx" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.577193 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:56:47 crc kubenswrapper[4957]: E0218 14:56:47.577953 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7ac820-e63c-4996-af6b-b0f45530ef91" containerName="nova-cell0-conductor-db-sync" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.577982 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7ac820-e63c-4996-af6b-b0f45530ef91" containerName="nova-cell0-conductor-db-sync" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.578455 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7ac820-e63c-4996-af6b-b0f45530ef91" containerName="nova-cell0-conductor-db-sync" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.579674 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.583834 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.585575 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k9jxh" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.593730 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.623900 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda89cee-ab64-461a-a48a-b5ec914cfb05-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.623977 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda89cee-ab64-461a-a48a-b5ec914cfb05-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.624101 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvchn\" (UniqueName: \"kubernetes.io/projected/dda89cee-ab64-461a-a48a-b5ec914cfb05-kube-api-access-wvchn\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.726492 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda89cee-ab64-461a-a48a-b5ec914cfb05-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.726571 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda89cee-ab64-461a-a48a-b5ec914cfb05-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 
14:56:47.726682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvchn\" (UniqueName: \"kubernetes.io/projected/dda89cee-ab64-461a-a48a-b5ec914cfb05-kube-api-access-wvchn\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.732306 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda89cee-ab64-461a-a48a-b5ec914cfb05-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.732587 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda89cee-ab64-461a-a48a-b5ec914cfb05-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.742870 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvchn\" (UniqueName: \"kubernetes.io/projected/dda89cee-ab64-461a-a48a-b5ec914cfb05-kube-api-access-wvchn\") pod \"nova-cell0-conductor-0\" (UID: \"dda89cee-ab64-461a-a48a-b5ec914cfb05\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:47 crc kubenswrapper[4957]: I0218 14:56:47.916793 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:48 crc kubenswrapper[4957]: I0218 14:56:48.485704 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:56:48 crc kubenswrapper[4957]: I0218 14:56:48.737710 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:56:48 crc kubenswrapper[4957]: I0218 14:56:48.738175 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:56:48 crc kubenswrapper[4957]: I0218 14:56:48.780432 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:56:48 crc kubenswrapper[4957]: I0218 14:56:48.794191 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.286636 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.401630 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dda89cee-ab64-461a-a48a-b5ec914cfb05","Type":"ContainerStarted","Data":"2ebf8136e23559d86817befd60a729736b25e9e3c7cb667ed6a1a91bf6b29bef"} Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.401711 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dda89cee-ab64-461a-a48a-b5ec914cfb05","Type":"ContainerStarted","Data":"6ec45b4fa8fa832bb22ebb11aff16d1e519a7e5bc1fe0637ff25568c455c6d36"} Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.402374 4957 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.402401 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.442165 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.442127864 podStartE2EDuration="2.442127864s" podCreationTimestamp="2026-02-18 14:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:56:49.423736222 +0000 UTC m=+1515.944600976" watchObservedRunningTime="2026-02-18 14:56:49.442127864 +0000 UTC m=+1515.962992608" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.890114 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.890526 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.937722 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:49 crc kubenswrapper[4957]: I0218 14:56:49.946006 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:50 crc kubenswrapper[4957]: I0218 14:56:50.412547 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:50 crc kubenswrapper[4957]: I0218 14:56:50.412587 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:50 crc kubenswrapper[4957]: I0218 14:56:50.412630 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:52 crc kubenswrapper[4957]: I0218 14:56:52.767573 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:56:52 crc kubenswrapper[4957]: I0218 14:56:52.768373 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:56:53 crc kubenswrapper[4957]: I0218 14:56:53.128578 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:56:53 crc kubenswrapper[4957]: I0218 14:56:53.330943 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:53 crc kubenswrapper[4957]: I0218 14:56:53.331053 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:56:53 crc kubenswrapper[4957]: I0218 14:56:53.964977 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:56:57 crc kubenswrapper[4957]: I0218 14:56:57.968469 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.515308 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-k2zxb"] Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.517520 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.524259 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.524464 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.539123 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2zxb"] Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.543105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.543523 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsww5\" (UniqueName: \"kubernetes.io/projected/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-kube-api-access-vsww5\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.543772 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-scripts\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.543884 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-config-data\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.676237 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.677187 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsww5\" (UniqueName: \"kubernetes.io/projected/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-kube-api-access-vsww5\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.677331 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-scripts\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.677370 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-config-data\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.694872 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.697947 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-scripts\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.700908 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-config-data\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.732885 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsww5\" (UniqueName: \"kubernetes.io/projected/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-kube-api-access-vsww5\") pod \"nova-cell0-cell-mapping-k2zxb\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.837812 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.840118 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.847863 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.857654 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:56:58 crc kubenswrapper[4957]: I0218 14:56:58.868453 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.044560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654jw\" (UniqueName: \"kubernetes.io/projected/f0cd3140-eb02-488a-a95c-87e12ba3d061-kube-api-access-654jw\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.044957 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.045092 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-config-data\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.045188 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0cd3140-eb02-488a-a95c-87e12ba3d061-logs\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.106296 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.108510 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.114893 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.142668 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-f7zmd"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.154033 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.155391 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654jw\" (UniqueName: \"kubernetes.io/projected/f0cd3140-eb02-488a-a95c-87e12ba3d061-kube-api-access-654jw\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.155509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-config-data\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.155558 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4md8\" (UniqueName: \"kubernetes.io/projected/b048c2e2-cb55-4531-b51d-32ad1c2a92be-kube-api-access-l4md8\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.155607 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.155682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-config-data\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.155767 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0cd3140-eb02-488a-a95c-87e12ba3d061-logs\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.156000 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b048c2e2-cb55-4531-b51d-32ad1c2a92be-logs\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.156029 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.164560 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0cd3140-eb02-488a-a95c-87e12ba3d061-logs\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.183547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-config-data\") pod 
\"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.192521 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.194467 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.201956 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.202235 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654jw\" (UniqueName: \"kubernetes.io/projected/f0cd3140-eb02-488a-a95c-87e12ba3d061-kube-api-access-654jw\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.204177 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9nx\" (UniqueName: \"kubernetes.io/projected/7ab939f4-a04d-445f-92fd-d26bd08f852c-kube-api-access-2z9nx\") pod \"aodh-db-create-f7zmd\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259478 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab939f4-a04d-445f-92fd-d26bd08f852c-operator-scripts\") pod \"aodh-db-create-f7zmd\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259525 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259644 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwj6v\" (UniqueName: \"kubernetes.io/projected/8b893f85-0849-44a2-b99c-02a720f05422-kube-api-access-kwj6v\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259674 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b048c2e2-cb55-4531-b51d-32ad1c2a92be-logs\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259702 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-config-data\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.259893 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4md8\" (UniqueName: \"kubernetes.io/projected/b048c2e2-cb55-4531-b51d-32ad1c2a92be-kube-api-access-l4md8\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.261409 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b048c2e2-cb55-4531-b51d-32ad1c2a92be-logs\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.276694 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.279586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.285474 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-config-data\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.316565 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4md8\" (UniqueName: \"kubernetes.io/projected/b048c2e2-cb55-4531-b51d-32ad1c2a92be-kube-api-access-l4md8\") pod \"nova-metadata-0\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.366491 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.368156 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9nx\" (UniqueName: \"kubernetes.io/projected/7ab939f4-a04d-445f-92fd-d26bd08f852c-kube-api-access-2z9nx\") pod \"aodh-db-create-f7zmd\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.368187 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.368257 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab939f4-a04d-445f-92fd-d26bd08f852c-operator-scripts\") pod \"aodh-db-create-f7zmd\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.368277 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.368331 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwj6v\" (UniqueName: \"kubernetes.io/projected/8b893f85-0849-44a2-b99c-02a720f05422-kube-api-access-kwj6v\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.373769 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.378946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab939f4-a04d-445f-92fd-d26bd08f852c-operator-scripts\") pod \"aodh-db-create-f7zmd\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.407211 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.413570 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-f7zmd"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.420099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwj6v\" (UniqueName: \"kubernetes.io/projected/8b893f85-0849-44a2-b99c-02a720f05422-kube-api-access-kwj6v\") pod \"nova-cell1-novncproxy-0\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.439507 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9nx\" (UniqueName: \"kubernetes.io/projected/7ab939f4-a04d-445f-92fd-d26bd08f852c-kube-api-access-2z9nx\") pod \"aodh-db-create-f7zmd\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.471012 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.472078 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.519568 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2c2d-account-create-update-tn2f2"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.521266 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.531007 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.562192 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2c2d-account-create-update-tn2f2"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.600054 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-f7zmd" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.607563 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.610333 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.637602 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.649929 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.651501 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rk4dn"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.656793 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.701489 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.711331 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/184b1171-6492-48b6-bf23-2286c360264b-operator-scripts\") pod \"aodh-2c2d-account-create-update-tn2f2\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.711483 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddt7n\" (UniqueName: \"kubernetes.io/projected/184b1171-6492-48b6-bf23-2286c360264b-kube-api-access-ddt7n\") pod \"aodh-2c2d-account-create-update-tn2f2\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.761277 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rk4dn"] Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.815572 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-config\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.815973 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-svc\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816002 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfgkf\" (UniqueName: \"kubernetes.io/projected/4a913c6a-cf3b-49f6-b1c6-70b090d52925-kube-api-access-dfgkf\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816084 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrms\" (UniqueName: \"kubernetes.io/projected/31309517-a9cd-40f9-8f3b-3f9b92f96247-kube-api-access-mnrms\") pod 
\"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816127 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddt7n\" (UniqueName: \"kubernetes.io/projected/184b1171-6492-48b6-bf23-2286c360264b-kube-api-access-ddt7n\") pod \"aodh-2c2d-account-create-update-tn2f2\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816224 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816296 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816596 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/184b1171-6492-48b6-bf23-2286c360264b-operator-scripts\") pod \"aodh-2c2d-account-create-update-tn2f2\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.816652 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-config-data\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.822040 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/184b1171-6492-48b6-bf23-2286c360264b-operator-scripts\") pod \"aodh-2c2d-account-create-update-tn2f2\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.886464 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddt7n\" (UniqueName: \"kubernetes.io/projected/184b1171-6492-48b6-bf23-2286c360264b-kube-api-access-ddt7n\") pod \"aodh-2c2d-account-create-update-tn2f2\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920054 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-config-data\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920121 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-config\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920163 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-svc\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920238 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfgkf\" (UniqueName: \"kubernetes.io/projected/4a913c6a-cf3b-49f6-b1c6-70b090d52925-kube-api-access-dfgkf\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920265 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrms\" (UniqueName: \"kubernetes.io/projected/31309517-a9cd-40f9-8f3b-3f9b92f96247-kube-api-access-mnrms\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920492 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.920547 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.924205 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-config\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.924612 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: 
\"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.928134 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.931121 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.934639 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.938825 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-svc\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.943171 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-config-data\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:56:59 crc kubenswrapper[4957]: I0218 14:56:59.948161 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrms\" (UniqueName: \"kubernetes.io/projected/31309517-a9cd-40f9-8f3b-3f9b92f96247-kube-api-access-mnrms\") pod \"dnsmasq-dns-9b86998b5-rk4dn\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.011547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfgkf\" (UniqueName: \"kubernetes.io/projected/4a913c6a-cf3b-49f6-b1c6-70b090d52925-kube-api-access-dfgkf\") pod \"nova-scheduler-0\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " pod="openstack/nova-scheduler-0" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.212460 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.218406 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.220072 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.530407 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qcgxs"] Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.532178 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.542915 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.543126 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.570482 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qcgxs"] Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.606787 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2zxb"] Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.666035 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-config-data\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.666363 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-scripts\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.666392 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.666469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhrj\" (UniqueName: \"kubernetes.io/projected/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-kube-api-access-swhrj\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.771328 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-scripts\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.771436 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.771566 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhrj\" (UniqueName: \"kubernetes.io/projected/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-kube-api-access-swhrj\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: 
\"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.771897 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-config-data\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.780983 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-config-data\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.781591 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-scripts\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.791953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhrj\" (UniqueName: \"kubernetes.io/projected/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-kube-api-access-swhrj\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.796966 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qcgxs\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:00 crc kubenswrapper[4957]: I0218 14:57:00.876054 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.352528 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.384683 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.400038 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-f7zmd"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.591179 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2zxb" event={"ID":"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a","Type":"ContainerStarted","Data":"3d9c2b9fc3ab71da01f89461a3fb761c6e10c8f3e271eb5de464b72ab8001f95"} Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.591225 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2zxb" event={"ID":"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a","Type":"ContainerStarted","Data":"c00ee21c574649ee893e29ef090a51a2dc7913ae5d7bc1ac5f91fbe6dd5e7d49"} Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.604682 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0cd3140-eb02-488a-a95c-87e12ba3d061","Type":"ContainerStarted","Data":"b0c5f8bf8824d1d789864eeabb7be45aa62808e9ddee6b4e7917a9713bf9d626"} Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.622237 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f7zmd" event={"ID":"7ab939f4-a04d-445f-92fd-d26bd08f852c","Type":"ContainerStarted","Data":"8ce5af0fe49b145e79aab46552f779cd6b3831c21ed620e6acb07b9c563aa70f"} Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.627080 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b048c2e2-cb55-4531-b51d-32ad1c2a92be","Type":"ContainerStarted","Data":"e81a286726c2392aa2d3794162016342038afddad6dbcae744e2a7aa9848cb07"} Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.644839 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2c2d-account-create-update-tn2f2"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.670239 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-k2zxb" podStartSLOduration=3.670221165 podStartE2EDuration="3.670221165s" podCreationTimestamp="2026-02-18 14:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:01.622466293 +0000 UTC m=+1528.143331037" watchObservedRunningTime="2026-02-18 14:57:01.670221165 +0000 UTC m=+1528.191085909" Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.721727 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rk4dn"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.799048 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.814404 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:57:01 crc kubenswrapper[4957]: I0218 14:57:01.942231 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qcgxs"] Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.673814 4957 generic.go:334] "Generic 
(PLEG): container finished" podID="7ab939f4-a04d-445f-92fd-d26bd08f852c" containerID="79a244a584f702c4fb08a1653bc760578b6a60356bb6d816e0d2737600890137" exitCode=0 Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.674533 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f7zmd" event={"ID":"7ab939f4-a04d-445f-92fd-d26bd08f852c","Type":"ContainerDied","Data":"79a244a584f702c4fb08a1653bc760578b6a60356bb6d816e0d2737600890137"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.682310 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c2d-account-create-update-tn2f2" event={"ID":"184b1171-6492-48b6-bf23-2286c360264b","Type":"ContainerStarted","Data":"e1f1fea28bb35b29ef07e94128f5b946a8099b11fea98c3bc578547fbb5b4ec4"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.682435 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c2d-account-create-update-tn2f2" event={"ID":"184b1171-6492-48b6-bf23-2286c360264b","Type":"ContainerStarted","Data":"e1448c1c143b9259145910d81c2e42e26b45b79249ab8d1d0e8d0ceaa0579ac4"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.686066 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8b893f85-0849-44a2-b99c-02a720f05422","Type":"ContainerStarted","Data":"1453f2e3ebfeb410f19a762a41b8d7b4e72652658df7a65df88b7ec49f1356e7"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.733792 4957 generic.go:334] "Generic (PLEG): container finished" podID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerID="f37df6fd7451398bce69af825f9808acd4f567e0f9efcbd6f0486f5f9b407749" exitCode=0 Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.733962 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" event={"ID":"31309517-a9cd-40f9-8f3b-3f9b92f96247","Type":"ContainerDied","Data":"f37df6fd7451398bce69af825f9808acd4f567e0f9efcbd6f0486f5f9b407749"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.733993 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" event={"ID":"31309517-a9cd-40f9-8f3b-3f9b92f96247","Type":"ContainerStarted","Data":"42942c6833da331980fdd18c84f49f9c1c92955b12665d43ae16d6219f170ef2"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.752185 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a913c6a-cf3b-49f6-b1c6-70b090d52925","Type":"ContainerStarted","Data":"833ef9b89e6d4de51f411a3daaf67c2fc425f1f7e58085c22f73bd2b63999f12"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.753144 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-2c2d-account-create-update-tn2f2" podStartSLOduration=3.753122886 podStartE2EDuration="3.753122886s" podCreationTimestamp="2026-02-18 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:02.726871276 +0000 UTC m=+1529.247736020" watchObservedRunningTime="2026-02-18 14:57:02.753122886 +0000 UTC m=+1529.273987630" Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.754895 4957 generic.go:334] "Generic (PLEG): container finished" podID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerID="ab6a389b2ea669acd56d71cc3369d20c5dae8d00ae5a31d93c78c90f4d45af25" exitCode=137 Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.754935 4957 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerDied","Data":"ab6a389b2ea669acd56d71cc3369d20c5dae8d00ae5a31d93c78c90f4d45af25"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.758707 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" event={"ID":"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622","Type":"ContainerStarted","Data":"01730d6d735d58461dda1209cdedeaa90b4fcf641bf04b52122f1a25dd8f91a7"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.758738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" event={"ID":"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622","Type":"ContainerStarted","Data":"b1eca616784b64085aa12d46dee77b6031678f5676b2adcb374e4f10a6fe4398"} Feb 18 14:57:02 crc kubenswrapper[4957]: I0218 14:57:02.835063 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" podStartSLOduration=2.835047126 podStartE2EDuration="2.835047126s" podCreationTimestamp="2026-02-18 14:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:02.809369253 +0000 UTC m=+1529.330233997" watchObservedRunningTime="2026-02-18 14:57:02.835047126 +0000 UTC m=+1529.355911870" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.169169 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.269184 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-run-httpd\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.269914 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.269988 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-config-data\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.270038 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-combined-ca-bundle\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.270163 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-log-httpd\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.270197 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-scripts\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.270338 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-sg-core-conf-yaml\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.270383 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8lwq\" (UniqueName: \"kubernetes.io/projected/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-kube-api-access-w8lwq\") pod \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\" (UID: \"bbaf2250-41f8-4278-a9ad-65c766e5e0dd\") " Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.270999 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.273099 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.286691 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-scripts" (OuterVolumeSpecName: "scripts") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.289931 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-kube-api-access-w8lwq" (OuterVolumeSpecName: "kube-api-access-w8lwq") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "kube-api-access-w8lwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.329524 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.374267 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.374307 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.374317 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.374329 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8lwq\" (UniqueName: \"kubernetes.io/projected/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-kube-api-access-w8lwq\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.449485 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.476861 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.489621 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-config-data" (OuterVolumeSpecName: "config-data") pod "bbaf2250-41f8-4278-a9ad-65c766e5e0dd" (UID: "bbaf2250-41f8-4278-a9ad-65c766e5e0dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.555145 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.579031 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaf2250-41f8-4278-a9ad-65c766e5e0dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.608427 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.775118 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" event={"ID":"31309517-a9cd-40f9-8f3b-3f9b92f96247","Type":"ContainerStarted","Data":"9be2ea1046c81e0d3232168b8c0567f2c9808ab66131d4621b5d6d0ab2dbc9b1"} Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.777178 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.787496 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbaf2250-41f8-4278-a9ad-65c766e5e0dd","Type":"ContainerDied","Data":"dec1a1ae044cc2ed101d83df0cbf0fae263040f34af917301ae43a43dc181a2b"} Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.787578 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.787585 4957 scope.go:117] "RemoveContainer" containerID="ab6a389b2ea669acd56d71cc3369d20c5dae8d00ae5a31d93c78c90f4d45af25" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.803412 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" podStartSLOduration=4.803392203 podStartE2EDuration="4.803392203s" podCreationTimestamp="2026-02-18 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:03.801051415 +0000 UTC m=+1530.321916169" watchObservedRunningTime="2026-02-18 14:57:03.803392203 +0000 UTC m=+1530.324256947" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.820032 4957 generic.go:334] "Generic (PLEG): container finished" podID="184b1171-6492-48b6-bf23-2286c360264b" containerID="e1f1fea28bb35b29ef07e94128f5b946a8099b11fea98c3bc578547fbb5b4ec4" exitCode=0 Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.820143 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c2d-account-create-update-tn2f2" event={"ID":"184b1171-6492-48b6-bf23-2286c360264b","Type":"ContainerDied","Data":"e1f1fea28bb35b29ef07e94128f5b946a8099b11fea98c3bc578547fbb5b4ec4"} Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.864856 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.880902 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.948114 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:03 crc kubenswrapper[4957]: E0218 14:57:03.949049 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="sg-core" Feb 18 
14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.949064 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="sg-core" Feb 18 14:57:03 crc kubenswrapper[4957]: E0218 14:57:03.949078 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="proxy-httpd" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.949083 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="proxy-httpd" Feb 18 14:57:03 crc kubenswrapper[4957]: E0218 14:57:03.949109 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-central-agent" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.949115 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-central-agent" Feb 18 14:57:03 crc kubenswrapper[4957]: E0218 14:57:03.949126 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-notification-agent" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.949146 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-notification-agent" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.957030 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-notification-agent" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.957363 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="proxy-httpd" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.957398 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="sg-core" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.957411 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" containerName="ceilometer-central-agent" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.959761 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.965974 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.966277 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:57:03 crc kubenswrapper[4957]: I0218 14:57:03.971645 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.111917 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-config-data\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.112719 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frf58\" (UniqueName: \"kubernetes.io/projected/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-kube-api-access-frf58\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.112762 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.112808 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-scripts\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.112843 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-run-httpd\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.113025 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-log-httpd\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.113332 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.215573 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 
14:57:04.215732 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-config-data\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.215832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.215856 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frf58\" (UniqueName: \"kubernetes.io/projected/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-kube-api-access-frf58\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.215882 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-scripts\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.215909 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-run-httpd\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.215996 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-log-httpd\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.227691 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-log-httpd\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:04 crc kubenswrapper[4957]: I0218 14:57:04.229538 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-run-httpd\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.230748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-scripts\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.239575 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-config-data\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.240072 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.255299 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frf58\" (UniqueName: \"kubernetes.io/projected/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-kube-api-access-frf58\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.271379 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.282395 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:05 crc kubenswrapper[4957]: I0218 14:57:04.288181 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbaf2250-41f8-4278-a9ad-65c766e5e0dd" path="/var/lib/kubelet/pods/bbaf2250-41f8-4278-a9ad-65c766e5e0dd/volumes" Feb 18 14:57:06 crc kubenswrapper[4957]: I0218 14:57:06.888633 4957 generic.go:334] "Generic (PLEG): container finished" podID="9b0c6928-d889-421a-81e2-5e9dd8e1e986" containerID="9b5b56bcedf4cb081d5bedfc96a6585aae36c6e77ed3081e492606f90e9c0e68" exitCode=137 Feb 18 14:57:06 crc kubenswrapper[4957]: I0218 14:57:06.889288 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" event={"ID":"9b0c6928-d889-421a-81e2-5e9dd8e1e986","Type":"ContainerDied","Data":"9b5b56bcedf4cb081d5bedfc96a6585aae36c6e77ed3081e492606f90e9c0e68"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.171672 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.200633 4957 scope.go:117] "RemoveContainer" containerID="8c1b890d5fd9505a656719032abf5bf5576c11c1ec0659700a912326efddd8f9" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.213317 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-f7zmd" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.279023 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.279092 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.279142 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.280190 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.280292 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" gracePeriod=600 Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.305048 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/184b1171-6492-48b6-bf23-2286c360264b-operator-scripts\") pod \"184b1171-6492-48b6-bf23-2286c360264b\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.305107 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab939f4-a04d-445f-92fd-d26bd08f852c-operator-scripts\") pod \"7ab939f4-a04d-445f-92fd-d26bd08f852c\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.305222 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddt7n\" (UniqueName: \"kubernetes.io/projected/184b1171-6492-48b6-bf23-2286c360264b-kube-api-access-ddt7n\") pod \"184b1171-6492-48b6-bf23-2286c360264b\" (UID: \"184b1171-6492-48b6-bf23-2286c360264b\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.305360 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9nx\" (UniqueName: \"kubernetes.io/projected/7ab939f4-a04d-445f-92fd-d26bd08f852c-kube-api-access-2z9nx\") pod \"7ab939f4-a04d-445f-92fd-d26bd08f852c\" (UID: \"7ab939f4-a04d-445f-92fd-d26bd08f852c\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.306510 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab939f4-a04d-445f-92fd-d26bd08f852c-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "7ab939f4-a04d-445f-92fd-d26bd08f852c" (UID: "7ab939f4-a04d-445f-92fd-d26bd08f852c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.306535 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184b1171-6492-48b6-bf23-2286c360264b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "184b1171-6492-48b6-bf23-2286c360264b" (UID: "184b1171-6492-48b6-bf23-2286c360264b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.310759 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab939f4-a04d-445f-92fd-d26bd08f852c-kube-api-access-2z9nx" (OuterVolumeSpecName: "kube-api-access-2z9nx") pod "7ab939f4-a04d-445f-92fd-d26bd08f852c" (UID: "7ab939f4-a04d-445f-92fd-d26bd08f852c"). InnerVolumeSpecName "kube-api-access-2z9nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.315067 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184b1171-6492-48b6-bf23-2286c360264b-kube-api-access-ddt7n" (OuterVolumeSpecName: "kube-api-access-ddt7n") pod "184b1171-6492-48b6-bf23-2286c360264b" (UID: "184b1171-6492-48b6-bf23-2286c360264b"). InnerVolumeSpecName "kube-api-access-ddt7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.375391 4957 scope.go:117] "RemoveContainer" containerID="2e317e757815ed454960cb98f2b5526451acf81732ac32f2002270cd04314718" Feb 18 14:57:07 crc kubenswrapper[4957]: E0218 14:57:07.409067 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.410596 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9nx\" (UniqueName: \"kubernetes.io/projected/7ab939f4-a04d-445f-92fd-d26bd08f852c-kube-api-access-2z9nx\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.410641 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/184b1171-6492-48b6-bf23-2286c360264b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.410655 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab939f4-a04d-445f-92fd-d26bd08f852c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.410666 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddt7n\" (UniqueName: \"kubernetes.io/projected/184b1171-6492-48b6-bf23-2286c360264b-kube-api-access-ddt7n\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.435721 4957 scope.go:117] "RemoveContainer" containerID="04132032e953fe9a4c03bc31595b5685b8e97224273fe90b506838a70c8ee0f9" Feb 18 14:57:07 
crc kubenswrapper[4957]: I0218 14:57:07.456141 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.619202 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data\") pod \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.619232 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmhqz\" (UniqueName: \"kubernetes.io/projected/9b0c6928-d889-421a-81e2-5e9dd8e1e986-kube-api-access-kmhqz\") pod \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.619349 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data-custom\") pod \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.619594 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-combined-ca-bundle\") pod \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\" (UID: \"9b0c6928-d889-421a-81e2-5e9dd8e1e986\") " Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.627866 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0c6928-d889-421a-81e2-5e9dd8e1e986-kube-api-access-kmhqz" (OuterVolumeSpecName: "kube-api-access-kmhqz") pod "9b0c6928-d889-421a-81e2-5e9dd8e1e986" (UID: "9b0c6928-d889-421a-81e2-5e9dd8e1e986"). InnerVolumeSpecName "kube-api-access-kmhqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.629204 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b0c6928-d889-421a-81e2-5e9dd8e1e986" (UID: "9b0c6928-d889-421a-81e2-5e9dd8e1e986"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.722639 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmhqz\" (UniqueName: \"kubernetes.io/projected/9b0c6928-d889-421a-81e2-5e9dd8e1e986-kube-api-access-kmhqz\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.722678 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.723921 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.728159 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b0c6928-d889-421a-81e2-5e9dd8e1e986" (UID: "9b0c6928-d889-421a-81e2-5e9dd8e1e986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.743992 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data" (OuterVolumeSpecName: "config-data") pod "9b0c6928-d889-421a-81e2-5e9dd8e1e986" (UID: "9b0c6928-d889-421a-81e2-5e9dd8e1e986"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.825254 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.825284 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0c6928-d889-421a-81e2-5e9dd8e1e986-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.904245 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" exitCode=0 Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.904323 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.904386 4957 scope.go:117] "RemoveContainer" containerID="3685eb85ff4135d1d788989087159dbba7bab3b709d7c66d8e50ece795084250" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.905410 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:57:07 crc kubenswrapper[4957]: E0218 14:57:07.905899 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.907648 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b048c2e2-cb55-4531-b51d-32ad1c2a92be","Type":"ContainerStarted","Data":"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.910461 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0cd3140-eb02-488a-a95c-87e12ba3d061","Type":"ContainerStarted","Data":"5e6f1e624a6456379a444c472e6ed627f85b9fcea51914672af79d002f6e738e"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.914900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c2d-account-create-update-tn2f2" event={"ID":"184b1171-6492-48b6-bf23-2286c360264b","Type":"ContainerDied","Data":"e1448c1c143b9259145910d81c2e42e26b45b79249ab8d1d0e8d0ceaa0579ac4"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.914938 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1448c1c143b9259145910d81c2e42e26b45b79249ab8d1d0e8d0ceaa0579ac4" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.915011 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2c2d-account-create-update-tn2f2" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.917232 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" event={"ID":"9b0c6928-d889-421a-81e2-5e9dd8e1e986","Type":"ContainerDied","Data":"1cc5d1bfc18192542b1d3142aaa65193c8cb21784486995a7604140f796ff7a4"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.917348 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7fb5f67558-76mdf" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.928174 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a913c6a-cf3b-49f6-b1c6-70b090d52925","Type":"ContainerStarted","Data":"fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.949821 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f7zmd" event={"ID":"7ab939f4-a04d-445f-92fd-d26bd08f852c","Type":"ContainerDied","Data":"8ce5af0fe49b145e79aab46552f779cd6b3831c21ed620e6acb07b9c563aa70f"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.949926 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce5af0fe49b145e79aab46552f779cd6b3831c21ed620e6acb07b9c563aa70f" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.950059 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-f7zmd" Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.975771 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8b893f85-0849-44a2-b99c-02a720f05422","Type":"ContainerStarted","Data":"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31"} Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.976077 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8b893f85-0849-44a2-b99c-02a720f05422" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31" gracePeriod=30 Feb 18 14:57:07 crc kubenswrapper[4957]: I0218 14:57:07.985611 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerStarted","Data":"d0aca479c1edc8f5e51fab2232e2a5ec1e6b7f262c215987e8b4b6c613f116e7"} Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.084464 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.7652506470000002 podStartE2EDuration="9.084442565s" podCreationTimestamp="2026-02-18 14:56:59 +0000 UTC" firstStartedPulling="2026-02-18 14:57:01.909666203 +0000 UTC m=+1528.430530947" lastFinishedPulling="2026-02-18 14:57:07.228858121 +0000 UTC m=+1533.749722865" observedRunningTime="2026-02-18 14:57:07.974154814 +0000 UTC m=+1534.495019558" watchObservedRunningTime="2026-02-18 14:57:08.084442565 +0000 UTC m=+1534.605307309" Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.097108 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.565438186 podStartE2EDuration="10.097084601s" podCreationTimestamp="2026-02-18 14:56:58 +0000 UTC" firstStartedPulling="2026-02-18 14:57:01.694873108 +0000 UTC m=+1528.215737852" lastFinishedPulling="2026-02-18 14:57:07.226519523 +0000 UTC m=+1533.747384267" observedRunningTime="2026-02-18 14:57:07.994214945 +0000 UTC m=+1534.515079689" watchObservedRunningTime="2026-02-18 14:57:08.097084601 +0000 UTC m=+1534.617949355" Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.153509 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7fb5f67558-76mdf"] Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.164022 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7fb5f67558-76mdf"] Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.168177 4957 scope.go:117] "RemoveContainer" containerID="9b5b56bcedf4cb081d5bedfc96a6585aae36c6e77ed3081e492606f90e9c0e68" Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.234878 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0c6928-d889-421a-81e2-5e9dd8e1e986" path="/var/lib/kubelet/pods/9b0c6928-d889-421a-81e2-5e9dd8e1e986/volumes" Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.999163 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b048c2e2-cb55-4531-b51d-32ad1c2a92be","Type":"ContainerStarted","Data":"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e"} Feb 18 14:57:08 crc kubenswrapper[4957]: I0218 14:57:08.999306 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-log" containerID="cri-o://a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d" gracePeriod=30 Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:08.999498 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-metadata" containerID="cri-o://2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e" gracePeriod=30 Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.004598 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0cd3140-eb02-488a-a95c-87e12ba3d061","Type":"ContainerStarted","Data":"f87915f0c46d43dddbba0635262ab6a28eeacffd7a59714cd68aa62f776c7c12"} Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.006399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerStarted","Data":"0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf"} Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.024571 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.931571039 podStartE2EDuration="11.024547895s" podCreationTimestamp="2026-02-18 14:56:58 +0000 UTC" firstStartedPulling="2026-02-18 14:57:01.353279455 +0000 UTC m=+1527.874144199" lastFinishedPulling="2026-02-18 14:57:07.446256311 +0000 UTC m=+1533.967121055" observedRunningTime="2026-02-18 14:57:09.018372997 +0000 UTC m=+1535.539237741" watchObservedRunningTime="2026-02-18 14:57:09.024547895 +0000 UTC m=+1535.545412639" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.039087 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.169508314 podStartE2EDuration="11.039069685s" podCreationTimestamp="2026-02-18 14:56:58 +0000 UTC" firstStartedPulling="2026-02-18 14:57:01.352991987 +0000 UTC m=+1527.873856731" lastFinishedPulling="2026-02-18 14:57:07.222553358 +0000 UTC m=+1533.743418102" observedRunningTime="2026-02-18 14:57:09.035461071 +0000 UTC m=+1535.556325825" watchObservedRunningTime="2026-02-18 14:57:09.039069685 +0000 UTC m=+1535.559934429" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.476868 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.477158 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.477170 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.477183 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.632802 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-xmtj7"] Feb 18 14:57:09 crc kubenswrapper[4957]: E0218 14:57:09.633815 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184b1171-6492-48b6-bf23-2286c360264b" containerName="mariadb-account-create-update" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.633839 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="184b1171-6492-48b6-bf23-2286c360264b" 
containerName="mariadb-account-create-update" Feb 18 14:57:09 crc kubenswrapper[4957]: E0218 14:57:09.633863 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab939f4-a04d-445f-92fd-d26bd08f852c" containerName="mariadb-database-create" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.633871 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab939f4-a04d-445f-92fd-d26bd08f852c" containerName="mariadb-database-create" Feb 18 14:57:09 crc kubenswrapper[4957]: E0218 14:57:09.633885 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0c6928-d889-421a-81e2-5e9dd8e1e986" containerName="heat-cfnapi" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.633893 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0c6928-d889-421a-81e2-5e9dd8e1e986" containerName="heat-cfnapi" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.634208 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0c6928-d889-421a-81e2-5e9dd8e1e986" containerName="heat-cfnapi" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.634231 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="184b1171-6492-48b6-bf23-2286c360264b" containerName="mariadb-account-create-update" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.634261 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab939f4-a04d-445f-92fd-d26bd08f852c" containerName="mariadb-database-create" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.635382 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.642310 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.642482 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-62fdg" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.642548 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.642311 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.644384 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xmtj7"] Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.650090 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.778825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6z7\" (UniqueName: \"kubernetes.io/projected/af20e059-3e19-4e13-be41-de0fb244b627-kube-api-access-zd6z7\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.778912 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-scripts\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.779049 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-combined-ca-bundle\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.779085 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-config-data\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.819462 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.885369 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-combined-ca-bundle\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.886181 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-config-data\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.886800 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd6z7\" (UniqueName: \"kubernetes.io/projected/af20e059-3e19-4e13-be41-de0fb244b627-kube-api-access-zd6z7\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.886872 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-scripts\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.895867 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-combined-ca-bundle\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.896224 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-scripts\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.897182 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-config-data\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.910127 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd6z7\" (UniqueName: 
\"kubernetes.io/projected/af20e059-3e19-4e13-be41-de0fb244b627-kube-api-access-zd6z7\") pod \"aodh-db-sync-xmtj7\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.981237 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.987704 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4md8\" (UniqueName: \"kubernetes.io/projected/b048c2e2-cb55-4531-b51d-32ad1c2a92be-kube-api-access-l4md8\") pod \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.987749 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b048c2e2-cb55-4531-b51d-32ad1c2a92be-logs\") pod \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.987860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-config-data\") pod \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.987979 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-combined-ca-bundle\") pod \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\" (UID: \"b048c2e2-cb55-4531-b51d-32ad1c2a92be\") " Feb 18 14:57:09 crc kubenswrapper[4957]: I0218 14:57:09.989304 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b048c2e2-cb55-4531-b51d-32ad1c2a92be-logs" (OuterVolumeSpecName: "logs") pod "b048c2e2-cb55-4531-b51d-32ad1c2a92be" (UID: "b048c2e2-cb55-4531-b51d-32ad1c2a92be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.005681 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b048c2e2-cb55-4531-b51d-32ad1c2a92be-kube-api-access-l4md8" (OuterVolumeSpecName: "kube-api-access-l4md8") pod "b048c2e2-cb55-4531-b51d-32ad1c2a92be" (UID: "b048c2e2-cb55-4531-b51d-32ad1c2a92be"). InnerVolumeSpecName "kube-api-access-l4md8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.064242 4957 generic.go:334] "Generic (PLEG): container finished" podID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerID="2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e" exitCode=0 Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.064499 4957 generic.go:334] "Generic (PLEG): container finished" podID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerID="a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d" exitCode=143 Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.065503 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.066025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b048c2e2-cb55-4531-b51d-32ad1c2a92be","Type":"ContainerDied","Data":"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e"} Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.066048 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b048c2e2-cb55-4531-b51d-32ad1c2a92be","Type":"ContainerDied","Data":"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d"} Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.066061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b048c2e2-cb55-4531-b51d-32ad1c2a92be","Type":"ContainerDied","Data":"e81a286726c2392aa2d3794162016342038afddad6dbcae744e2a7aa9848cb07"} Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.066076 4957 scope.go:117] "RemoveContainer" containerID="2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.092332 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4md8\" (UniqueName: \"kubernetes.io/projected/b048c2e2-cb55-4531-b51d-32ad1c2a92be-kube-api-access-l4md8\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.092380 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b048c2e2-cb55-4531-b51d-32ad1c2a92be-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.097950 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b048c2e2-cb55-4531-b51d-32ad1c2a92be" (UID: "b048c2e2-cb55-4531-b51d-32ad1c2a92be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.113618 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-config-data" (OuterVolumeSpecName: "config-data") pod "b048c2e2-cb55-4531-b51d-32ad1c2a92be" (UID: "b048c2e2-cb55-4531-b51d-32ad1c2a92be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.194066 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.194099 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b048c2e2-cb55-4531-b51d-32ad1c2a92be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.331871 4957 scope.go:117] "RemoveContainer" containerID="a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.383464 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.384513 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.384609 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.384655 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.512733 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.514227 4957 scope.go:117] "RemoveContainer" containerID="2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e" Feb 18 14:57:10 crc kubenswrapper[4957]: E0218 14:57:10.527865 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e\": container with ID starting with 2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e not found: ID does not exist" containerID="2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.527916 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e"} err="failed to get container status \"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e\": rpc error: code = NotFound desc = could not find container \"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e\": container with ID starting with 2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e not found: ID does not exist" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.527944 4957 scope.go:117] "RemoveContainer" containerID="a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d" Feb 18 14:57:10 crc kubenswrapper[4957]: E0218 14:57:10.532559 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d\": container with ID starting with a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d not found: ID does not exist" containerID="a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 
14:57:10.532598 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d"} err="failed to get container status \"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d\": rpc error: code = NotFound desc = could not find container \"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d\": container with ID starting with a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d not found: ID does not exist" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.532623 4957 scope.go:117] "RemoveContainer" containerID="2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.539209 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e"} err="failed to get container status \"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e\": rpc error: code = NotFound desc = could not find container \"2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e\": container with ID starting with 2c6382537ea0a3d044ebd5726273b3549b84510ff7652a1c2205da09d8bdfb4e not found: ID does not exist" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.539291 4957 scope.go:117] "RemoveContainer" containerID="a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.548841 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d"} err="failed to get container status \"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d\": rpc error: code = NotFound desc = could not find container \"a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d\": container with ID starting with a4933a93d6ff93d6a587413d30f6a2af5220273555ef1373b6502b6e1308e20d not found: ID does not exist" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.579577 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.580048 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.600307 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.646823 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.673483 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rst4k"] Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.673764 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerName="dnsmasq-dns" 
containerID="cri-o://f568f4eb22b215c69c7a0c81adfbdc1cfd1d51b2a55d3436b786596a9a8853ea" gracePeriod=10 Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.688516 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:10 crc kubenswrapper[4957]: E0218 14:57:10.689216 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-metadata" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.691215 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-metadata" Feb 18 14:57:10 crc kubenswrapper[4957]: E0218 14:57:10.691306 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-log" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.691500 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-log" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.692383 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-metadata" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.693120 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" containerName="nova-metadata-log" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.695017 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.705157 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.705228 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.709254 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.745110 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.745190 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-logs\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.745235 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-config-data\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.745302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.745364 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgps\" (UniqueName: \"kubernetes.io/projected/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-kube-api-access-kqgps\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.834360 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xmtj7"] Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.847376 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.847464 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgps\" (UniqueName: \"kubernetes.io/projected/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-kube-api-access-kqgps\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.847570 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.847632 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-logs\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.847678 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-config-data\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.850122 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-logs\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.852099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.854586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.854793 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-config-data\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:10 crc kubenswrapper[4957]: I0218 14:57:10.890202 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgps\" (UniqueName: \"kubernetes.io/projected/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-kube-api-access-kqgps\") pod \"nova-metadata-0\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " pod="openstack/nova-metadata-0" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.084082 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xmtj7" event={"ID":"af20e059-3e19-4e13-be41-de0fb244b627","Type":"ContainerStarted","Data":"2ab6d4a4b87ae03ee65f11240662bfe1a17828b551c15d7a842c89f4a16d94fe"} Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.097995 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerStarted","Data":"f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d"} Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.100544 4957 generic.go:334] "Generic (PLEG): container finished" podID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerID="f568f4eb22b215c69c7a0c81adfbdc1cfd1d51b2a55d3436b786596a9a8853ea" exitCode=0 Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.100634 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" event={"ID":"1ef0249f-b7a3-4183-9f46-0553a63c26ac","Type":"ContainerDied","Data":"f568f4eb22b215c69c7a0c81adfbdc1cfd1d51b2a55d3436b786596a9a8853ea"} Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.163121 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.593781 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.687301 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-svc\") pod \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.687371 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-sb\") pod \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.688461 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-nb\") pod \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.688827 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95qh\" (UniqueName: \"kubernetes.io/projected/1ef0249f-b7a3-4183-9f46-0553a63c26ac-kube-api-access-j95qh\") pod \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.688851 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-config\") pod \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.689003 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-swift-storage-0\") pod \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\" (UID: \"1ef0249f-b7a3-4183-9f46-0553a63c26ac\") " Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.714207 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef0249f-b7a3-4183-9f46-0553a63c26ac-kube-api-access-j95qh" (OuterVolumeSpecName: "kube-api-access-j95qh") pod "1ef0249f-b7a3-4183-9f46-0553a63c26ac" (UID: "1ef0249f-b7a3-4183-9f46-0553a63c26ac"). InnerVolumeSpecName "kube-api-access-j95qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.791633 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95qh\" (UniqueName: \"kubernetes.io/projected/1ef0249f-b7a3-4183-9f46-0553a63c26ac-kube-api-access-j95qh\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.813382 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-config" (OuterVolumeSpecName: "config") pod "1ef0249f-b7a3-4183-9f46-0553a63c26ac" (UID: "1ef0249f-b7a3-4183-9f46-0553a63c26ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.855096 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ef0249f-b7a3-4183-9f46-0553a63c26ac" (UID: "1ef0249f-b7a3-4183-9f46-0553a63c26ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.894083 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.894295 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.909659 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ef0249f-b7a3-4183-9f46-0553a63c26ac" (UID: "1ef0249f-b7a3-4183-9f46-0553a63c26ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.954740 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ef0249f-b7a3-4183-9f46-0553a63c26ac" (UID: "1ef0249f-b7a3-4183-9f46-0553a63c26ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.975135 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ef0249f-b7a3-4183-9f46-0553a63c26ac" (UID: "1ef0249f-b7a3-4183-9f46-0553a63c26ac"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.999065 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.999101 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:11 crc kubenswrapper[4957]: I0218 14:57:11.999112 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ef0249f-b7a3-4183-9f46-0553a63c26ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:12 crc kubenswrapper[4957]: W0218 14:57:12.105759 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883f4ea2_7199_4e92_b46d_1fe3d655c9d5.slice/crio-c4a1dc1861bc018fb503a0cb3e94e6a753f166143899346043e7d494aa0327fc WatchSource:0}: Error finding container c4a1dc1861bc018fb503a0cb3e94e6a753f166143899346043e7d494aa0327fc: Status 404 returned error can't find the container with id c4a1dc1861bc018fb503a0cb3e94e6a753f166143899346043e7d494aa0327fc Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.107609 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.229853 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.269287 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b048c2e2-cb55-4531-b51d-32ad1c2a92be" path="/var/lib/kubelet/pods/b048c2e2-cb55-4531-b51d-32ad1c2a92be/volumes" Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.270525 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rst4k" event={"ID":"1ef0249f-b7a3-4183-9f46-0553a63c26ac","Type":"ContainerDied","Data":"c857029ab903a044e7a2d9f932029dbe8ef04f4d0673ec35a74b02de5c2ecf84"} Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.270579 4957 scope.go:117] "RemoveContainer" containerID="f568f4eb22b215c69c7a0c81adfbdc1cfd1d51b2a55d3436b786596a9a8853ea" Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.284121 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerStarted","Data":"75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f"} Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.307194 4957 scope.go:117] "RemoveContainer" containerID="133aed933ba7099e0fabbec64c2d6990b96b4c50fa7225dcf49d748699dce49b" Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.357493 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rst4k"] Feb 18 14:57:12 crc kubenswrapper[4957]: I0218 14:57:12.367298 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rst4k"] Feb 18 14:57:13 crc kubenswrapper[4957]: I0218 14:57:13.309610 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"883f4ea2-7199-4e92-b46d-1fe3d655c9d5","Type":"ContainerStarted","Data":"505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3"} Feb 18 14:57:13 crc kubenswrapper[4957]: I0218 14:57:13.310081 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"883f4ea2-7199-4e92-b46d-1fe3d655c9d5","Type":"ContainerStarted","Data":"9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f"} Feb 18 14:57:13 crc kubenswrapper[4957]: I0218 14:57:13.310094 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"883f4ea2-7199-4e92-b46d-1fe3d655c9d5","Type":"ContainerStarted","Data":"c4a1dc1861bc018fb503a0cb3e94e6a753f166143899346043e7d494aa0327fc"} Feb 18 14:57:13 crc kubenswrapper[4957]: I0218 14:57:13.316373 4957 generic.go:334] "Generic (PLEG): container finished" podID="6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" containerID="3d9c2b9fc3ab71da01f89461a3fb761c6e10c8f3e271eb5de464b72ab8001f95" exitCode=0 Feb 18 14:57:13 crc kubenswrapper[4957]: I0218 14:57:13.316443 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2zxb" event={"ID":"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a","Type":"ContainerDied","Data":"3d9c2b9fc3ab71da01f89461a3fb761c6e10c8f3e271eb5de464b72ab8001f95"} Feb 18 14:57:13 crc kubenswrapper[4957]: I0218 14:57:13.341217 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.341198047 podStartE2EDuration="3.341198047s" podCreationTimestamp="2026-02-18 14:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:13.335021208 +0000 UTC m=+1539.855885952" watchObservedRunningTime="2026-02-18 14:57:13.341198047 +0000 UTC m=+1539.862062791" Feb 18 14:57:14 crc kubenswrapper[4957]: I0218 14:57:14.232636 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" path="/var/lib/kubelet/pods/1ef0249f-b7a3-4183-9f46-0553a63c26ac/volumes" Feb 18 14:57:14 crc kubenswrapper[4957]: I0218 14:57:14.338477 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerStarted","Data":"63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01"} Feb 18 14:57:14 crc kubenswrapper[4957]: I0218 14:57:14.362305 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.644055917 podStartE2EDuration="11.36228894s" podCreationTimestamp="2026-02-18 14:57:03 +0000 UTC" firstStartedPulling="2026-02-18 14:57:07.73622394 +0000 UTC m=+1534.257088684" lastFinishedPulling="2026-02-18 14:57:13.454456963 +0000 UTC m=+1539.975321707" observedRunningTime="2026-02-18 14:57:14.361887778 +0000 UTC m=+1540.882752532" watchObservedRunningTime="2026-02-18 14:57:14.36228894 +0000 UTC m=+1540.883153674" Feb 18 14:57:15 crc kubenswrapper[4957]: I0218 14:57:15.365176 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:57:16 crc kubenswrapper[4957]: I0218 14:57:16.163859 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:57:16 crc kubenswrapper[4957]: I0218 14:57:16.163932 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:57:17 crc 
kubenswrapper[4957]: I0218 14:57:17.859186 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:57:17 crc kubenswrapper[4957]: I0218 14:57:17.991995 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsww5\" (UniqueName: \"kubernetes.io/projected/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-kube-api-access-vsww5\") pod \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " Feb 18 14:57:17 crc kubenswrapper[4957]: I0218 14:57:17.992056 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-scripts\") pod \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " Feb 18 14:57:17 crc kubenswrapper[4957]: I0218 14:57:17.992276 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-combined-ca-bundle\") pod \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " Feb 18 14:57:17 crc kubenswrapper[4957]: I0218 14:57:17.995221 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-config-data\") pod \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\" (UID: \"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a\") " Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.002204 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-scripts" (OuterVolumeSpecName: "scripts") pod "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" (UID: "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.004296 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-kube-api-access-vsww5" (OuterVolumeSpecName: "kube-api-access-vsww5") pod "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" (UID: "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a"). InnerVolumeSpecName "kube-api-access-vsww5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.044517 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" (UID: "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.099046 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.099076 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsww5\" (UniqueName: \"kubernetes.io/projected/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-kube-api-access-vsww5\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.099087 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.129059 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-config-data" (OuterVolumeSpecName: "config-data") pod "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" (UID: "6fa8a4e3-39e3-4e9b-8529-9f712fbc509a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.201231 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.400979 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k2zxb" event={"ID":"6fa8a4e3-39e3-4e9b-8529-9f712fbc509a","Type":"ContainerDied","Data":"c00ee21c574649ee893e29ef090a51a2dc7913ae5d7bc1ac5f91fbe6dd5e7d49"} Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.401035 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00ee21c574649ee893e29ef090a51a2dc7913ae5d7bc1ac5f91fbe6dd5e7d49" Feb 18 14:57:18 crc kubenswrapper[4957]: I0218 14:57:18.401109 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k2zxb" Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.056279 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.056922 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" containerName="nova-scheduler-scheduler" containerID="cri-o://fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4" gracePeriod=30 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.070299 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.070865 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-log" containerID="cri-o://5e6f1e624a6456379a444c472e6ed627f85b9fcea51914672af79d002f6e738e" gracePeriod=30 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.070953 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-api" containerID="cri-o://f87915f0c46d43dddbba0635262ab6a28eeacffd7a59714cd68aa62f776c7c12" gracePeriod=30 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.090030 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.090306 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-log" containerID="cri-o://9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f" gracePeriod=30 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.090566 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-metadata" containerID="cri-o://505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3" gracePeriod=30 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.413047 4957 generic.go:334] "Generic (PLEG): container finished" podID="1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" containerID="01730d6d735d58461dda1209cdedeaa90b4fcf641bf04b52122f1a25dd8f91a7" exitCode=0 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.413445 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" event={"ID":"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622","Type":"ContainerDied","Data":"01730d6d735d58461dda1209cdedeaa90b4fcf641bf04b52122f1a25dd8f91a7"} Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.416134 4957 generic.go:334] "Generic (PLEG): container finished" podID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerID="5e6f1e624a6456379a444c472e6ed627f85b9fcea51914672af79d002f6e738e" exitCode=143 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.416218 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0cd3140-eb02-488a-a95c-87e12ba3d061","Type":"ContainerDied","Data":"5e6f1e624a6456379a444c472e6ed627f85b9fcea51914672af79d002f6e738e"} Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.419007 4957 generic.go:334] "Generic (PLEG): container finished" 
podID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerID="9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f" exitCode=143 Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.419089 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"883f4ea2-7199-4e92-b46d-1fe3d655c9d5","Type":"ContainerDied","Data":"9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f"} Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.420865 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xmtj7" event={"ID":"af20e059-3e19-4e13-be41-de0fb244b627","Type":"ContainerStarted","Data":"b29e628ff5cb4f796ed60b9c7972af2f184fd134d446f1e92f2d922990b9cd01"} Feb 18 14:57:19 crc kubenswrapper[4957]: I0218 14:57:19.463794 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-xmtj7" podStartSLOduration=2.744063277 podStartE2EDuration="10.463774079s" podCreationTimestamp="2026-02-18 14:57:09 +0000 UTC" firstStartedPulling="2026-02-18 14:57:10.869228296 +0000 UTC m=+1537.390093040" lastFinishedPulling="2026-02-18 14:57:18.588939108 +0000 UTC m=+1545.109803842" observedRunningTime="2026-02-18 14:57:19.45307532 +0000 UTC m=+1545.973940064" watchObservedRunningTime="2026-02-18 14:57:19.463774079 +0000 UTC m=+1545.984638823" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.113559 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.223357 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.226029 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.228255 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.228332 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" containerName="nova-scheduler-scheduler" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.259254 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-nova-metadata-tls-certs\") pod \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.259371 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-combined-ca-bundle\") pod \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.259441 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-logs\") pod \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.259473 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-config-data\") pod \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.259508 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqgps\" (UniqueName: \"kubernetes.io/projected/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-kube-api-access-kqgps\") pod \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\" (UID: \"883f4ea2-7199-4e92-b46d-1fe3d655c9d5\") " Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.260262 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-logs" (OuterVolumeSpecName: "logs") pod "883f4ea2-7199-4e92-b46d-1fe3d655c9d5" (UID: "883f4ea2-7199-4e92-b46d-1fe3d655c9d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.260657 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.289197 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-kube-api-access-kqgps" (OuterVolumeSpecName: "kube-api-access-kqgps") pod "883f4ea2-7199-4e92-b46d-1fe3d655c9d5" (UID: "883f4ea2-7199-4e92-b46d-1fe3d655c9d5"). InnerVolumeSpecName "kube-api-access-kqgps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.309626 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-config-data" (OuterVolumeSpecName: "config-data") pod "883f4ea2-7199-4e92-b46d-1fe3d655c9d5" (UID: "883f4ea2-7199-4e92-b46d-1fe3d655c9d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.315589 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883f4ea2-7199-4e92-b46d-1fe3d655c9d5" (UID: "883f4ea2-7199-4e92-b46d-1fe3d655c9d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.324503 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "883f4ea2-7199-4e92-b46d-1fe3d655c9d5" (UID: "883f4ea2-7199-4e92-b46d-1fe3d655c9d5"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.363331 4957 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.363383 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.363395 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.363407 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqgps\" (UniqueName: \"kubernetes.io/projected/883f4ea2-7199-4e92-b46d-1fe3d655c9d5-kube-api-access-kqgps\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.432690 4957 generic.go:334] "Generic (PLEG): container finished" podID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerID="505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3" exitCode=0 Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.432728 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"883f4ea2-7199-4e92-b46d-1fe3d655c9d5","Type":"ContainerDied","Data":"505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3"} Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.432785 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"883f4ea2-7199-4e92-b46d-1fe3d655c9d5","Type":"ContainerDied","Data":"c4a1dc1861bc018fb503a0cb3e94e6a753f166143899346043e7d494aa0327fc"} Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.432804 4957 scope.go:117] "RemoveContainer" containerID="505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.432806 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.483106 4957 scope.go:117] "RemoveContainer" containerID="9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.490721 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.508471 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531011 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.531613 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerName="dnsmasq-dns" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531628 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerName="dnsmasq-dns" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.531657 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerName="init" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531663 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerName="init" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.531680 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" containerName="nova-manage" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531687 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" containerName="nova-manage" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.531702 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-metadata" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531708 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-metadata" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.531723 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-log" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531730 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-log" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531940 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-log" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531950 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" containerName="nova-manage" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531971 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef0249f-b7a3-4183-9f46-0553a63c26ac" containerName="dnsmasq-dns" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.531989 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" containerName="nova-metadata-metadata" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.533240 4957 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.543267 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.543529 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.544881 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.550353 4957 scope.go:117] "RemoveContainer" containerID="505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.551162 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3\": container with ID starting with 505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3 not found: ID does not exist" containerID="505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.551295 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3"} err="failed to get container status \"505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3\": rpc error: code = NotFound desc = could not find container \"505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3\": container with ID starting with 505676ccb912aa25dbe9d9c6db1fc245672a25467f6ab67a93f19f0644a054f3 not found: ID does not exist" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.551395 4957 scope.go:117] "RemoveContainer" containerID="9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f" Feb 18 14:57:20 crc kubenswrapper[4957]: E0218 14:57:20.552065 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f\": container with ID starting with 9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f not found: ID does not exist" containerID="9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.552175 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f"} err="failed to get container status \"9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f\": rpc error: code = NotFound desc = could not find container \"9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f\": container with ID starting with 9b8dcf7c727c0270a6fd7fb3a7e3dcd56d3757ebd6cd90f87fbc355f95aa281f not found: ID does not exist" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.672576 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.672657 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-config-data\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.672685 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.672818 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsk8\" (UniqueName: \"kubernetes.io/projected/3e8ad40e-e92a-4b1c-b081-7da83ab34669-kube-api-access-tvsk8\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.672927 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8ad40e-e92a-4b1c-b081-7da83ab34669-logs\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.775003 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsk8\" (UniqueName: \"kubernetes.io/projected/3e8ad40e-e92a-4b1c-b081-7da83ab34669-kube-api-access-tvsk8\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.775123 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8ad40e-e92a-4b1c-b081-7da83ab34669-logs\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.775181 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.775249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-config-data\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.775271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.776827 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8ad40e-e92a-4b1c-b081-7da83ab34669-logs\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") 
" pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.780566 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.785724 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-config-data\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.787522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.804196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsk8\" (UniqueName: \"kubernetes.io/projected/3e8ad40e-e92a-4b1c-b081-7da83ab34669-kube-api-access-tvsk8\") pod \"nova-metadata-0\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.868907 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:57:20 crc kubenswrapper[4957]: I0218 14:57:20.948824 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.091762 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-scripts\") pod \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.092144 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhrj\" (UniqueName: \"kubernetes.io/projected/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-kube-api-access-swhrj\") pod \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.092170 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-config-data\") pod \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.092230 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-combined-ca-bundle\") pod \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\" (UID: \"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622\") " Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.116360 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-scripts" (OuterVolumeSpecName: "scripts") pod "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" (UID: 
"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.128087 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-kube-api-access-swhrj" (OuterVolumeSpecName: "kube-api-access-swhrj") pod "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" (UID: "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622"). InnerVolumeSpecName "kube-api-access-swhrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.136940 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" (UID: "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.165970 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-config-data" (OuterVolumeSpecName: "config-data") pod "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" (UID: "1361fd5c-3b5c-41e3-9c89-8df6ce0ea622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.201082 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.201193 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swhrj\" (UniqueName: \"kubernetes.io/projected/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-kube-api-access-swhrj\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.202670 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.202729 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.427117 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.456394 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" event={"ID":"1361fd5c-3b5c-41e3-9c89-8df6ce0ea622","Type":"ContainerDied","Data":"b1eca616784b64085aa12d46dee77b6031678f5676b2adcb374e4f10a6fe4398"} Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.456508 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1eca616784b64085aa12d46dee77b6031678f5676b2adcb374e4f10a6fe4398" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.456602 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qcgxs" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.466107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8ad40e-e92a-4b1c-b081-7da83ab34669","Type":"ContainerStarted","Data":"0323e7705b1a0db12c5d6949258c55ff251c52b5705ccf8942e20a678c39cce4"} Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.528387 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 14:57:21 crc kubenswrapper[4957]: E0218 14:57:21.529012 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" containerName="nova-cell1-conductor-db-sync" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.529035 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" containerName="nova-cell1-conductor-db-sync" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.529386 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" containerName="nova-cell1-conductor-db-sync" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.530302 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.542755 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.560192 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.611156 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.611219 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.611314 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9lsm\" (UniqueName: \"kubernetes.io/projected/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-kube-api-access-p9lsm\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.713034 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9lsm\" (UniqueName: \"kubernetes.io/projected/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-kube-api-access-p9lsm\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.713203 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-config-data\") pod \"nova-cell1-conductor-0\" 
(UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.713247 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.718544 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.725016 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.753311 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9lsm\" (UniqueName: \"kubernetes.io/projected/d3787001-5ff0-47b4-917d-b0e9cbabd9a0-kube-api-access-p9lsm\") pod \"nova-cell1-conductor-0\" (UID: \"d3787001-5ff0-47b4-917d-b0e9cbabd9a0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.918047 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:57:21 crc kubenswrapper[4957]: I0218 14:57:21.952017 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.018763 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfgkf\" (UniqueName: \"kubernetes.io/projected/4a913c6a-cf3b-49f6-b1c6-70b090d52925-kube-api-access-dfgkf\") pod \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.018898 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-config-data\") pod \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.018967 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-combined-ca-bundle\") pod \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\" (UID: \"4a913c6a-cf3b-49f6-b1c6-70b090d52925\") " Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.024443 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a913c6a-cf3b-49f6-b1c6-70b090d52925-kube-api-access-dfgkf" (OuterVolumeSpecName: "kube-api-access-dfgkf") pod "4a913c6a-cf3b-49f6-b1c6-70b090d52925" (UID: "4a913c6a-cf3b-49f6-b1c6-70b090d52925"). InnerVolumeSpecName "kube-api-access-dfgkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.117503 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-config-data" (OuterVolumeSpecName: "config-data") pod "4a913c6a-cf3b-49f6-b1c6-70b090d52925" (UID: "4a913c6a-cf3b-49f6-b1c6-70b090d52925"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.122534 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfgkf\" (UniqueName: \"kubernetes.io/projected/4a913c6a-cf3b-49f6-b1c6-70b090d52925-kube-api-access-dfgkf\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.122570 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.126781 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a913c6a-cf3b-49f6-b1c6-70b090d52925" (UID: "4a913c6a-cf3b-49f6-b1c6-70b090d52925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.216401 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:57:22 crc kubenswrapper[4957]: E0218 14:57:22.216692 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.224313 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a913c6a-cf3b-49f6-b1c6-70b090d52925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.232644 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883f4ea2-7199-4e92-b46d-1fe3d655c9d5" path="/var/lib/kubelet/pods/883f4ea2-7199-4e92-b46d-1fe3d655c9d5/volumes" Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.478446 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xmtj7" event={"ID":"af20e059-3e19-4e13-be41-de0fb244b627","Type":"ContainerDied","Data":"b29e628ff5cb4f796ed60b9c7972af2f184fd134d446f1e92f2d922990b9cd01"} Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.478466 4957 generic.go:334] "Generic (PLEG): container finished" podID="af20e059-3e19-4e13-be41-de0fb244b627" containerID="b29e628ff5cb4f796ed60b9c7972af2f184fd134d446f1e92f2d922990b9cd01" exitCode=0 Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.483356 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8ad40e-e92a-4b1c-b081-7da83ab34669","Type":"ContainerStarted","Data":"1ccf214a29b8e0df23866d723b91742a0ad63fcd37d6787b81417e5862edbcec"} Feb 18 14:57:22 crc 
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.483399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8ad40e-e92a-4b1c-b081-7da83ab34669","Type":"ContainerStarted","Data":"6d4d3e813992d51b14303f85553e5b8c2444d099c20f27f2074f789417281247"}
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.486250 4957 generic.go:334] "Generic (PLEG): container finished" podID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerID="f87915f0c46d43dddbba0635262ab6a28eeacffd7a59714cd68aa62f776c7c12" exitCode=0
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.486293 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0cd3140-eb02-488a-a95c-87e12ba3d061","Type":"ContainerDied","Data":"f87915f0c46d43dddbba0635262ab6a28eeacffd7a59714cd68aa62f776c7c12"}
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.487932 4957 generic.go:334] "Generic (PLEG): container finished" podID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4" exitCode=0
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.487957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a913c6a-cf3b-49f6-b1c6-70b090d52925","Type":"ContainerDied","Data":"fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4"}
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.487971 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4a913c6a-cf3b-49f6-b1c6-70b090d52925","Type":"ContainerDied","Data":"833ef9b89e6d4de51f411a3daaf67c2fc425f1f7e58085c22f73bd2b63999f12"}
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.487986 4957 scope.go:117] "RemoveContainer" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.488089 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.544815 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.562797 4957 scope.go:117] "RemoveContainer" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4"
Feb 18 14:57:22 crc kubenswrapper[4957]: E0218 14:57:22.563978 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4\": container with ID starting with fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4 not found: ID does not exist" containerID="fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.564027 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4"} err="failed to get container status \"fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4\": rpc error: code = NotFound desc = could not find container \"fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4\": container with ID starting with fbd22de36ee7fde6bd502dd4e12a70e2dd44810b4c81c6c8057c229f61b2daa4 not found: ID does not exist"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.615012 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.614985952 podStartE2EDuration="2.614985952s" podCreationTimestamp="2026-02-18 14:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:22.519392196 +0000 UTC m=+1549.040256950" watchObservedRunningTime="2026-02-18 14:57:22.614985952 +0000 UTC m=+1549.135850696"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.685483 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.699696 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.715120 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:57:22 crc kubenswrapper[4957]: E0218 14:57:22.715595 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" containerName="nova-scheduler-scheduler"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.715607 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" containerName="nova-scheduler-scheduler"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.715848 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" containerName="nova-scheduler-scheduler"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.716638 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.720149 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.740524 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.864213 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5p6p\" (UniqueName: \"kubernetes.io/projected/0c252f5d-2954-4fdd-a999-29b7c6469ade-kube-api-access-k5p6p\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.864341 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.864495 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-config-data\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.966160 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-config-data\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.966355 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5p6p\" (UniqueName: \"kubernetes.io/projected/0c252f5d-2954-4fdd-a999-29b7c6469ade-kube-api-access-k5p6p\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.966410 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.972806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
Feb 18 14:57:22 crc kubenswrapper[4957]: I0218 14:57:22.976982 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-config-data\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0"
\"kubernetes.io/projected/0c252f5d-2954-4fdd-a999-29b7c6469ade-kube-api-access-k5p6p\") pod \"nova-scheduler-0\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " pod="openstack/nova-scheduler-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.147855 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.186394 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.272175 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-combined-ca-bundle\") pod \"f0cd3140-eb02-488a-a95c-87e12ba3d061\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.272490 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-654jw\" (UniqueName: \"kubernetes.io/projected/f0cd3140-eb02-488a-a95c-87e12ba3d061-kube-api-access-654jw\") pod \"f0cd3140-eb02-488a-a95c-87e12ba3d061\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.272544 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-config-data\") pod \"f0cd3140-eb02-488a-a95c-87e12ba3d061\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.272583 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0cd3140-eb02-488a-a95c-87e12ba3d061-logs\") pod \"f0cd3140-eb02-488a-a95c-87e12ba3d061\" (UID: \"f0cd3140-eb02-488a-a95c-87e12ba3d061\") " Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.278208 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0cd3140-eb02-488a-a95c-87e12ba3d061-logs" (OuterVolumeSpecName: "logs") pod "f0cd3140-eb02-488a-a95c-87e12ba3d061" (UID: "f0cd3140-eb02-488a-a95c-87e12ba3d061"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.280610 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cd3140-eb02-488a-a95c-87e12ba3d061-kube-api-access-654jw" (OuterVolumeSpecName: "kube-api-access-654jw") pod "f0cd3140-eb02-488a-a95c-87e12ba3d061" (UID: "f0cd3140-eb02-488a-a95c-87e12ba3d061"). InnerVolumeSpecName "kube-api-access-654jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.306269 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0cd3140-eb02-488a-a95c-87e12ba3d061" (UID: "f0cd3140-eb02-488a-a95c-87e12ba3d061"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.314441 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-config-data" (OuterVolumeSpecName: "config-data") pod "f0cd3140-eb02-488a-a95c-87e12ba3d061" (UID: "f0cd3140-eb02-488a-a95c-87e12ba3d061"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.374966 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.375233 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-654jw\" (UniqueName: \"kubernetes.io/projected/f0cd3140-eb02-488a-a95c-87e12ba3d061-kube-api-access-654jw\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.375245 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cd3140-eb02-488a-a95c-87e12ba3d061-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.375253 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0cd3140-eb02-488a-a95c-87e12ba3d061-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.506970 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.507035 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0cd3140-eb02-488a-a95c-87e12ba3d061","Type":"ContainerDied","Data":"b0c5f8bf8824d1d789864eeabb7be45aa62808e9ddee6b4e7917a9713bf9d626"} Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.507652 4957 scope.go:117] "RemoveContainer" containerID="f87915f0c46d43dddbba0635262ab6a28eeacffd7a59714cd68aa62f776c7c12" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.515405 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3787001-5ff0-47b4-917d-b0e9cbabd9a0","Type":"ContainerStarted","Data":"dd12ddb19b54519bd2944cab921bcd823dbb74b901cd9574d1264358dac6b351"} Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.515465 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3787001-5ff0-47b4-917d-b0e9cbabd9a0","Type":"ContainerStarted","Data":"a069213918120b754b8f38adb8d7662d305cdd88c137f0e735b3e2b96b61b53d"} Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.516315 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.546747 4957 scope.go:117] "RemoveContainer" containerID="5e6f1e624a6456379a444c472e6ed627f85b9fcea51914672af79d002f6e738e" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.569847 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.569825187 podStartE2EDuration="2.569825187s" podCreationTimestamp="2026-02-18 14:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:57:23.53808521 +0000 UTC m=+1550.058949964" watchObservedRunningTime="2026-02-18 14:57:23.569825187 +0000 UTC m=+1550.090689931" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.597005 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.611052 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.628326 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:23 crc kubenswrapper[4957]: E0218 14:57:23.629158 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-api" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.629176 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-api" Feb 18 14:57:23 crc kubenswrapper[4957]: E0218 14:57:23.629213 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-log" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.629218 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-log" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.629453 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-log" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.629490 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" containerName="nova-api-api" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.630736 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.636257 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.678500 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.683932 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.683993 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-logs\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.684024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-config-data\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.684228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6bk7\" (UniqueName: \"kubernetes.io/projected/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-kube-api-access-h6bk7\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.723559 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.787530 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6bk7\" (UniqueName: \"kubernetes.io/projected/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-kube-api-access-h6bk7\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.787577 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.787609 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-logs\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.787635 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-config-data\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.788226 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-logs\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.791469 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-config-data\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.792186 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.809797 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6bk7\" (UniqueName: \"kubernetes.io/projected/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-kube-api-access-h6bk7\") pod \"nova-api-0\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") " pod="openstack/nova-api-0" Feb 18 14:57:23 crc kubenswrapper[4957]: I0218 14:57:23.965140 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.136573 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.198372 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-scripts\") pod \"af20e059-3e19-4e13-be41-de0fb244b627\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.198488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-config-data\") pod \"af20e059-3e19-4e13-be41-de0fb244b627\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.199287 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd6z7\" (UniqueName: \"kubernetes.io/projected/af20e059-3e19-4e13-be41-de0fb244b627-kube-api-access-zd6z7\") pod \"af20e059-3e19-4e13-be41-de0fb244b627\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.199348 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-combined-ca-bundle\") pod \"af20e059-3e19-4e13-be41-de0fb244b627\" (UID: \"af20e059-3e19-4e13-be41-de0fb244b627\") " Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.215640 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af20e059-3e19-4e13-be41-de0fb244b627-kube-api-access-zd6z7" (OuterVolumeSpecName: "kube-api-access-zd6z7") pod "af20e059-3e19-4e13-be41-de0fb244b627" (UID: "af20e059-3e19-4e13-be41-de0fb244b627"). InnerVolumeSpecName "kube-api-access-zd6z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.216030 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-scripts" (OuterVolumeSpecName: "scripts") pod "af20e059-3e19-4e13-be41-de0fb244b627" (UID: "af20e059-3e19-4e13-be41-de0fb244b627"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.235873 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a913c6a-cf3b-49f6-b1c6-70b090d52925" path="/var/lib/kubelet/pods/4a913c6a-cf3b-49f6-b1c6-70b090d52925/volumes" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.236871 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0cd3140-eb02-488a-a95c-87e12ba3d061" path="/var/lib/kubelet/pods/f0cd3140-eb02-488a-a95c-87e12ba3d061/volumes" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.247940 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-config-data" (OuterVolumeSpecName: "config-data") pod "af20e059-3e19-4e13-be41-de0fb244b627" (UID: "af20e059-3e19-4e13-be41-de0fb244b627"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.306791 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd6z7\" (UniqueName: \"kubernetes.io/projected/af20e059-3e19-4e13-be41-de0fb244b627-kube-api-access-zd6z7\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.306832 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.306843 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.339303 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af20e059-3e19-4e13-be41-de0fb244b627" (UID: "af20e059-3e19-4e13-be41-de0fb244b627"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.409981 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af20e059-3e19-4e13-be41-de0fb244b627-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:24 crc kubenswrapper[4957]: W0218 14:57:24.500940 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c417d58_815f_4a5d_aacf_bb6a06bc8b51.slice/crio-899cfbf2fa1216a8a254743ed791d61a63101d6b07789192ed4157e0194ca36e WatchSource:0}: Error finding container 899cfbf2fa1216a8a254743ed791d61a63101d6b07789192ed4157e0194ca36e: Status 404 returned error can't find the container with id 899cfbf2fa1216a8a254743ed791d61a63101d6b07789192ed4157e0194ca36e Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.501497 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.538109 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c252f5d-2954-4fdd-a999-29b7c6469ade","Type":"ContainerStarted","Data":"c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68"} Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.538158 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c252f5d-2954-4fdd-a999-29b7c6469ade","Type":"ContainerStarted","Data":"3330ed036e421b617733c035a76899a1f41b82ae47cda2114f8962b1f08884d8"} Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.541650 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xmtj7" event={"ID":"af20e059-3e19-4e13-be41-de0fb244b627","Type":"ContainerDied","Data":"2ab6d4a4b87ae03ee65f11240662bfe1a17828b551c15d7a842c89f4a16d94fe"} Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.541694 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab6d4a4b87ae03ee65f11240662bfe1a17828b551c15d7a842c89f4a16d94fe" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.541752 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xmtj7" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.546177 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c417d58-815f-4a5d-aacf-bb6a06bc8b51","Type":"ContainerStarted","Data":"899cfbf2fa1216a8a254743ed791d61a63101d6b07789192ed4157e0194ca36e"} Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.565063 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.565039062 podStartE2EDuration="2.565039062s" podCreationTimestamp="2026-02-18 14:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:24.557902135 +0000 UTC m=+1551.078766879" watchObservedRunningTime="2026-02-18 14:57:24.565039062 +0000 UTC m=+1551.085903806" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.960691 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:57:24 crc kubenswrapper[4957]: E0218 14:57:24.962477 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af20e059-3e19-4e13-be41-de0fb244b627" containerName="aodh-db-sync" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.962518 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af20e059-3e19-4e13-be41-de0fb244b627" containerName="aodh-db-sync" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.962887 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="af20e059-3e19-4e13-be41-de0fb244b627" containerName="aodh-db-sync" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.970931 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.974836 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.975049 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:57:24 crc kubenswrapper[4957]: I0218 14:57:24.975449 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-62fdg" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.005692 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.034930 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.035120 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-config-data\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.035173 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-scripts\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc 
kubenswrapper[4957]: I0218 14:57:25.035344 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqkv\" (UniqueName: \"kubernetes.io/projected/f31564e5-da12-4b69-b9ca-da180143fcf7-kube-api-access-dvqkv\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.137996 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-config-data\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.138041 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-scripts\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.138136 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvqkv\" (UniqueName: \"kubernetes.io/projected/f31564e5-da12-4b69-b9ca-da180143fcf7-kube-api-access-dvqkv\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.138209 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.143394 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-config-data\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.147840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-scripts\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.154996 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.162995 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvqkv\" (UniqueName: \"kubernetes.io/projected/f31564e5-da12-4b69-b9ca-da180143fcf7-kube-api-access-dvqkv\") pod \"aodh-0\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.376184 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.584029 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c417d58-815f-4a5d-aacf-bb6a06bc8b51","Type":"ContainerStarted","Data":"24e8d5f20b031ca69e3fc1db7624a1216c481cb783d1af65698f6e57b2ee78d7"} Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.584081 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c417d58-815f-4a5d-aacf-bb6a06bc8b51","Type":"ContainerStarted","Data":"a826a374248b7899d62e1ed034cc30d914edab884d16f334ae62df1eb81d6389"} Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.626926 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.626907545 podStartE2EDuration="2.626907545s" podCreationTimestamp="2026-02-18 14:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:25.621623742 +0000 UTC m=+1552.142488486" watchObservedRunningTime="2026-02-18 14:57:25.626907545 +0000 UTC m=+1552.147772289" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.869679 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:57:25 crc kubenswrapper[4957]: I0218 14:57:25.871393 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:57:26 crc kubenswrapper[4957]: I0218 14:57:26.032023 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:57:26 crc kubenswrapper[4957]: I0218 14:57:26.605286 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerStarted","Data":"87a6a19ca35f71fb79b295723e52b5f0c3c9dbde9c89de09efa9d66c79c89662"} Feb 18 14:57:27 crc kubenswrapper[4957]: I0218 14:57:27.630886 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerStarted","Data":"0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545"} Feb 18 14:57:28 crc kubenswrapper[4957]: I0218 14:57:28.187885 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:57:28 crc kubenswrapper[4957]: I0218 14:57:28.307527 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.010103 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.010842 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-central-agent" containerID="cri-o://0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf" gracePeriod=30 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.010918 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="proxy-httpd" containerID="cri-o://63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01" gracePeriod=30 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.010985 4957 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="sg-core" containerID="cri-o://75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f" gracePeriod=30 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.011050 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-notification-agent" containerID="cri-o://f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d" gracePeriod=30 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.024842 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": EOF" Feb 18 14:57:29 crc kubenswrapper[4957]: E0218 14:57:29.602687 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc34b37f_eee2_4e72_b5f5_0c9f7a8fedc3.slice/crio-0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc34b37f_eee2_4e72_b5f5_0c9f7a8fedc3.slice/crio-conmon-0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.666574 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerStarted","Data":"36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f"} Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.669936 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerID="63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01" exitCode=0 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.669969 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerID="75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f" exitCode=2 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.669979 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerID="0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf" exitCode=0 Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.669991 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerDied","Data":"63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01"} Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.670015 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerDied","Data":"75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f"} Feb 18 14:57:29 crc kubenswrapper[4957]: I0218 14:57:29.670026 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerDied","Data":"0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf"} Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.324386 4957 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.398680 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-config-data\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.398819 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frf58\" (UniqueName: \"kubernetes.io/projected/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-kube-api-access-frf58\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.398866 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-sg-core-conf-yaml\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.398942 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-run-httpd\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.399001 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-scripts\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.399058 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-log-httpd\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.399160 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-combined-ca-bundle\") pod \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\" (UID: \"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3\") " Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.405031 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.405085 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.405565 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-scripts" (OuterVolumeSpecName: "scripts") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.414980 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-kube-api-access-frf58" (OuterVolumeSpecName: "kube-api-access-frf58") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "kube-api-access-frf58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.436267 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.504358 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frf58\" (UniqueName: \"kubernetes.io/projected/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-kube-api-access-frf58\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.504389 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.504399 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.504408 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.504431 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.516326 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.581868 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-config-data" (OuterVolumeSpecName: "config-data") pod "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" (UID: "bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.606461 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.606505 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.684970 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerID="f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d" exitCode=0 Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.685021 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerDied","Data":"f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d"} Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.685054 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3","Type":"ContainerDied","Data":"d0aca479c1edc8f5e51fab2232e2a5ec1e6b7f262c215987e8b4b6c613f116e7"} Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.685076 4957 scope.go:117] "RemoveContainer" containerID="63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.685238 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.731746 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.739140 4957 scope.go:117] "RemoveContainer" containerID="75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.758479 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.787491 4957 scope.go:117] "RemoveContainer" containerID="f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.799098 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.799675 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-central-agent" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.799694 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-central-agent" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.799705 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="sg-core" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.799712 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="sg-core" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.799725 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" 
containerName="ceilometer-notification-agent" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.799731 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-notification-agent" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.799741 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="proxy-httpd" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.799747 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="proxy-httpd" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.799989 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="sg-core" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.800006 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="proxy-httpd" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.800016 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-central-agent" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.800027 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" containerName="ceilometer-notification-agent" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.802103 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.805407 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.805544 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.811723 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.829975 4957 scope.go:117] "RemoveContainer" containerID="0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.852908 4957 scope.go:117] "RemoveContainer" containerID="63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.853446 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01\": container with ID starting with 63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01 not found: ID does not exist" containerID="63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.853499 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01"} err="failed to get container status \"63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01\": rpc error: code = NotFound desc = could not find container \"63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01\": container with ID starting with 63f52b326f18d037abd43c6340e95d7f2c0a447913c93815a7539ec32d89da01 not found: ID does not exist" Feb 18 14:57:30 crc 
kubenswrapper[4957]: I0218 14:57:30.853528 4957 scope.go:117] "RemoveContainer" containerID="75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.853919 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f\": container with ID starting with 75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f not found: ID does not exist" containerID="75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.853993 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f"} err="failed to get container status \"75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f\": rpc error: code = NotFound desc = could not find container \"75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f\": container with ID starting with 75cc6931588be5820eb086ca73fea1b3b64d672a1cbe642fcae84e796efe870f not found: ID does not exist" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.854029 4957 scope.go:117] "RemoveContainer" containerID="f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.854411 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d\": container with ID starting with f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d not found: ID does not exist" containerID="f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.854458 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d"} err="failed to get container status \"f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d\": rpc error: code = NotFound desc = could not find container \"f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d\": container with ID starting with f217ad3ab0ec876b4b2d70ff032cba1d000c153a3313540f14e2706e7dc2cf0d not found: ID does not exist" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.854480 4957 scope.go:117] "RemoveContainer" containerID="0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf" Feb 18 14:57:30 crc kubenswrapper[4957]: E0218 14:57:30.855129 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf\": container with ID starting with 0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf not found: ID does not exist" containerID="0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.855158 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf"} err="failed to get container status \"0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf\": rpc error: code = NotFound desc = could not find container 
\"0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf\": container with ID starting with 0deacaba22fa5c38bc83d56ac00893d1b5ca73f087d0940caac8dd70dae1e6bf not found: ID does not exist" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.870453 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.870681 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.919703 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-config-data\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.919809 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjk4\" (UniqueName: \"kubernetes.io/projected/098731c3-3fd0-495d-806d-8f8f931110d6-kube-api-access-5tjk4\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.919859 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-log-httpd\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.919890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-scripts\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.919931 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.919976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:30 crc kubenswrapper[4957]: I0218 14:57:30.920012 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-run-httpd\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.025613 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-scripts\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.025787 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.025885 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.026222 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-run-httpd\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.026450 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-config-data\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.026609 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjk4\" (UniqueName: \"kubernetes.io/projected/098731c3-3fd0-495d-806d-8f8f931110d6-kube-api-access-5tjk4\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.026699 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-log-httpd\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.027273 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-log-httpd\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.027465 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-run-httpd\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.033352 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-config-data\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.033710 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0" Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.034225 4957 operation_generator.go:637] "MountVolume.SetUp 
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.041621 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-scripts\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0"
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.044780 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tjk4\" (UniqueName: \"kubernetes.io/projected/098731c3-3fd0-495d-806d-8f8f931110d6-kube-api-access-5tjk4\") pod \"ceilometer-0\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " pod="openstack/ceilometer-0"
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.130646 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.702336 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.710564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerStarted","Data":"cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209"}
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.880711 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:57:31 crc kubenswrapper[4957]: W0218 14:57:31.889570 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod098731c3_3fd0_495d_806d_8f8f931110d6.slice/crio-4ba957de3954619124ab95cdd05dbd742d7ac5c806eda41b4f320dc351359a9b WatchSource:0}: Error finding container 4ba957de3954619124ab95cdd05dbd742d7ac5c806eda41b4f320dc351359a9b: Status 404 returned error can't find the container with id 4ba957de3954619124ab95cdd05dbd742d7ac5c806eda41b4f320dc351359a9b
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.906763 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:57:31 crc kubenswrapper[4957]: I0218 14:57:31.906775 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:57:32 crc kubenswrapper[4957]: I0218 14:57:32.034118 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 18 14:57:32 crc kubenswrapper[4957]: I0218 14:57:32.234296 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3" path="/var/lib/kubelet/pods/bc34b37f-eee2-4e72-b5f5-0c9f7a8fedc3/volumes"
Feb 18 14:57:32 crc kubenswrapper[4957]: I0218 14:57:32.730330 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a"}
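Both startup-probe failures above end with "(Client.Timeout exceeded while awaiting headers)", which is the error Go's net/http client produces when its overall timeout fires before response headers arrive; kubelet derives that timeout from the probe's timeoutSeconds. A sketch of the same failure mode, with the timeout value assumed for illustration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Client-side deadline, as a probe's timeoutSeconds would impose.
	client := &http.Client{Timeout: 1 * time.Second}
	// 10.217.0.247:8775 is the endpoint probed above; any server that
	// accepts the connection but stalls before sending headers will do.
	_, err := client.Get("https://10.217.0.247:8775/")
	if err != nil {
		// err then reads like the probeResult output in the entries above:
		// "... context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		fmt.Println("Probe failed:", err)
	}
}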
event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a"} Feb 18 14:57:32 crc kubenswrapper[4957]: I0218 14:57:32.730375 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"4ba957de3954619124ab95cdd05dbd742d7ac5c806eda41b4f320dc351359a9b"} Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.187357 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.235215 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.790334 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerStarted","Data":"8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc"} Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.790700 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-listener" containerID="cri-o://8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc" gracePeriod=30 Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.790725 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-evaluator" containerID="cri-o://36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f" gracePeriod=30 Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.790753 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-notifier" containerID="cri-o://cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209" gracePeriod=30 Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.790681 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-api" containerID="cri-o://0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545" gracePeriod=30 Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.813162 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.477506154 podStartE2EDuration="9.813142454s" podCreationTimestamp="2026-02-18 14:57:24 +0000 UTC" firstStartedPulling="2026-02-18 14:57:26.041948233 +0000 UTC m=+1552.562812977" lastFinishedPulling="2026-02-18 14:57:33.377584533 +0000 UTC m=+1559.898449277" observedRunningTime="2026-02-18 14:57:33.808804319 +0000 UTC m=+1560.329669073" watchObservedRunningTime="2026-02-18 14:57:33.813142454 +0000 UTC m=+1560.334007198" Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.856415 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.965660 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:57:33 crc kubenswrapper[4957]: I0218 14:57:33.965711 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.232536 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398"
Feb 18 14:57:34 crc kubenswrapper[4957]: E0218 14:57:34.233122 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.805074 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2"}
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.808853 4957 generic.go:334] "Generic (PLEG): container finished" podID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerID="cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209" exitCode=0
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.808890 4957 generic.go:334] "Generic (PLEG): container finished" podID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerID="36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f" exitCode=0
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.808901 4957 generic.go:334] "Generic (PLEG): container finished" podID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerID="0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545" exitCode=0
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.808947 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerDied","Data":"cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209"}
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.808987 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerDied","Data":"36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f"}
Feb 18 14:57:34 crc kubenswrapper[4957]: I0218 14:57:34.808999 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerDied","Data":"0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545"}
Feb 18 14:57:35 crc kubenswrapper[4957]: I0218 14:57:35.048592 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:57:35 crc kubenswrapper[4957]: I0218 14:57:35.048592 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 14:57:35 crc kubenswrapper[4957]: I0218 14:57:35.819824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88"}
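The "back-off 5m0s" in the machine-config-daemon error above is CrashLoopBackOff at its ceiling: kubelet delays each restart of a crashing container with an exponential back-off capped at five minutes. The 10s initial delay and the doubling below are the documented defaults, hard-coded here purely for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" ceiling seen above
	delay := 10 * time.Second        // documented initial CrashLoopBackOff delay
	for n := 1; delay < maxDelay; n++ {
		fmt.Printf("restart %d: wait %v\n", n, delay) // 10s, 20s, 40s, 1m20s, 2m40s
		delay *= 2
	}
	fmt.Println("every later restart: wait", maxDelay)
}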
event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88"} Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.847123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerStarted","Data":"a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab"} Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.847794 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.847692 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="proxy-httpd" containerID="cri-o://a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab" gracePeriod=30 Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.847326 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-central-agent" containerID="cri-o://9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a" gracePeriod=30 Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.847713 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="sg-core" containerID="cri-o://4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88" gracePeriod=30 Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.847721 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-notification-agent" containerID="cri-o://97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2" gracePeriod=30 Feb 18 14:57:37 crc kubenswrapper[4957]: I0218 14:57:37.881003 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.543143939 podStartE2EDuration="7.880985457s" podCreationTimestamp="2026-02-18 14:57:30 +0000 UTC" firstStartedPulling="2026-02-18 14:57:31.892021211 +0000 UTC m=+1558.412885955" lastFinishedPulling="2026-02-18 14:57:37.229862729 +0000 UTC m=+1563.750727473" observedRunningTime="2026-02-18 14:57:37.870770691 +0000 UTC m=+1564.391635435" watchObservedRunningTime="2026-02-18 14:57:37.880985457 +0000 UTC m=+1564.401850201" Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.615752 4957 util.go:48] "No ready sandbox for pod can be found. 
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.735359 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-combined-ca-bundle\") pod \"8b893f85-0849-44a2-b99c-02a720f05422\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") "
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.735689 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-config-data\") pod \"8b893f85-0849-44a2-b99c-02a720f05422\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") "
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.736383 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwj6v\" (UniqueName: \"kubernetes.io/projected/8b893f85-0849-44a2-b99c-02a720f05422-kube-api-access-kwj6v\") pod \"8b893f85-0849-44a2-b99c-02a720f05422\" (UID: \"8b893f85-0849-44a2-b99c-02a720f05422\") "
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.741960 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b893f85-0849-44a2-b99c-02a720f05422-kube-api-access-kwj6v" (OuterVolumeSpecName: "kube-api-access-kwj6v") pod "8b893f85-0849-44a2-b99c-02a720f05422" (UID: "8b893f85-0849-44a2-b99c-02a720f05422"). InnerVolumeSpecName "kube-api-access-kwj6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.774064 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b893f85-0849-44a2-b99c-02a720f05422" (UID: "8b893f85-0849-44a2-b99c-02a720f05422"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.774661 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-config-data" (OuterVolumeSpecName: "config-data") pod "8b893f85-0849-44a2-b99c-02a720f05422" (UID: "8b893f85-0849-44a2-b99c-02a720f05422"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.840188 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwj6v\" (UniqueName: \"kubernetes.io/projected/8b893f85-0849-44a2-b99c-02a720f05422-kube-api-access-kwj6v\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.840228 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.840237 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b893f85-0849-44a2-b99c-02a720f05422-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.859731 4957 generic.go:334] "Generic (PLEG): container finished" podID="098731c3-3fd0-495d-806d-8f8f931110d6" containerID="a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab" exitCode=0
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.861797 4957 generic.go:334] "Generic (PLEG): container finished" podID="098731c3-3fd0-495d-806d-8f8f931110d6" containerID="4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88" exitCode=2
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.861911 4957 generic.go:334] "Generic (PLEG): container finished" podID="098731c3-3fd0-495d-806d-8f8f931110d6" containerID="97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2" exitCode=0
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.859769 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerDied","Data":"a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab"}
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.862127 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerDied","Data":"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88"}
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.862154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerDied","Data":"97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2"}
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.865016 4957 generic.go:334] "Generic (PLEG): container finished" podID="8b893f85-0849-44a2-b99c-02a720f05422" containerID="655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31" exitCode=137
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.865062 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8b893f85-0849-44a2-b99c-02a720f05422","Type":"ContainerDied","Data":"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31"}
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.865093 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.865226 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8b893f85-0849-44a2-b99c-02a720f05422","Type":"ContainerDied","Data":"1453f2e3ebfeb410f19a762a41b8d7b4e72652658df7a65df88b7ec49f1356e7"}
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.865307 4957 scope.go:117] "RemoveContainer" containerID="655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.913840 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.921257 4957 scope.go:117] "RemoveContainer" containerID="655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31"
Feb 18 14:57:38 crc kubenswrapper[4957]: E0218 14:57:38.922542 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31\": container with ID starting with 655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31 not found: ID does not exist" containerID="655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.922583 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31"} err="failed to get container status \"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31\": rpc error: code = NotFound desc = could not find container \"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31\": container with ID starting with 655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31 not found: ID does not exist"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.938860 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.961865 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:57:38 crc kubenswrapper[4957]: E0218 14:57:38.962712 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b893f85-0849-44a2-b99c-02a720f05422" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.962825 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b893f85-0849-44a2-b99c-02a720f05422" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.963193 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b893f85-0849-44a2-b99c-02a720f05422" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.964105 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.964281 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
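The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pair above is the kubelet re-deleting a container CRI-O has already removed; the runtime answers with a gRPC NotFound status, which callers can recognize by code rather than by message text. A sketch that fabricates an equivalent error (the message is copied from the log; the helpers are the standard google.golang.org/grpc/status API):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// An error shaped like the runtime's reply above.
	err := status.Errorf(codes.NotFound, "could not find container %q",
		"655f2914d723924bf80c867dd18f2fc5e98a466e00938cd121d79d0d9317cc31")
	// Branch on the gRPC code instead of parsing the message:
	if status.Code(err) == codes.NotFound {
		fmt.Println("already gone, nothing to delete:", err)
	}
}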
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.969770 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.970666 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 14:57:38 crc kubenswrapper[4957]: I0218 14:57:38.970846 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.045912 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.045964 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.046010 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqg5\" (UniqueName: \"kubernetes.io/projected/4657d4de-c971-422a-abb6-9f3f16421c2a-kube-api-access-5lqg5\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.046061 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.046151 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.147627 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.147968 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.148133 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.148269 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.148539 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lqg5\" (UniqueName: \"kubernetes.io/projected/4657d4de-c971-422a-abb6-9f3f16421c2a-kube-api-access-5lqg5\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.151840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.151880 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.152065 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.152074 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657d4de-c971-422a-abb6-9f3f16421c2a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.165112 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lqg5\" (UniqueName: \"kubernetes.io/projected/4657d4de-c971-422a-abb6-9f3f16421c2a-kube-api-access-5lqg5\") pod \"nova-cell1-novncproxy-0\" (UID: \"4657d4de-c971-422a-abb6-9f3f16421c2a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.286331 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.782759 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:57:39 crc kubenswrapper[4957]: I0218 14:57:39.887698 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4657d4de-c971-422a-abb6-9f3f16421c2a","Type":"ContainerStarted","Data":"8fbce17ea5ffe26ac728867cd10e162c9b91d6d2a6bbe1cccc9679ac6eb7937e"} Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.226974 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b893f85-0849-44a2-b99c-02a720f05422" path="/var/lib/kubelet/pods/8b893f85-0849-44a2-b99c-02a720f05422/volumes" Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.877814 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.879288 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.886261 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.925130 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4657d4de-c971-422a-abb6-9f3f16421c2a","Type":"ContainerStarted","Data":"16642303d0f649f3993bb76d0676ef8e8c66658fa1975dee0d430a9f972f896c"} Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.934106 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:57:40 crc kubenswrapper[4957]: I0218 14:57:40.962119 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.962097863 podStartE2EDuration="2.962097863s" podCreationTimestamp="2026-02-18 14:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:40.958142238 +0000 UTC m=+1567.479006992" watchObservedRunningTime="2026-02-18 14:57:40.962097863 +0000 UTC m=+1567.482962617" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.740550 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.820744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-scripts\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.820824 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-sg-core-conf-yaml\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.820938 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-config-data\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.821012 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tjk4\" (UniqueName: \"kubernetes.io/projected/098731c3-3fd0-495d-806d-8f8f931110d6-kube-api-access-5tjk4\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.821065 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-combined-ca-bundle\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.821176 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-run-httpd\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.821264 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-log-httpd\") pod \"098731c3-3fd0-495d-806d-8f8f931110d6\" (UID: \"098731c3-3fd0-495d-806d-8f8f931110d6\") " Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.821629 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.821917 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.822100 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.822125 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/098731c3-3fd0-495d-806d-8f8f931110d6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.827588 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098731c3-3fd0-495d-806d-8f8f931110d6-kube-api-access-5tjk4" (OuterVolumeSpecName: "kube-api-access-5tjk4") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "kube-api-access-5tjk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.841160 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-scripts" (OuterVolumeSpecName: "scripts") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.871995 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.924200 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.924233 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.924246 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tjk4\" (UniqueName: \"kubernetes.io/projected/098731c3-3fd0-495d-806d-8f8f931110d6-kube-api-access-5tjk4\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.943196 4957 generic.go:334] "Generic (PLEG): container finished" podID="098731c3-3fd0-495d-806d-8f8f931110d6" containerID="9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a" exitCode=0 Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.943273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerDied","Data":"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a"} Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.943337 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"098731c3-3fd0-495d-806d-8f8f931110d6","Type":"ContainerDied","Data":"4ba957de3954619124ab95cdd05dbd742d7ac5c806eda41b4f320dc351359a9b"} Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.943359 4957 scope.go:117] "RemoveContainer" containerID="a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.943300 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.950975 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.968809 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-config-data" (OuterVolumeSpecName: "config-data") pod "098731c3-3fd0-495d-806d-8f8f931110d6" (UID: "098731c3-3fd0-495d-806d-8f8f931110d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:41 crc kubenswrapper[4957]: I0218 14:57:41.987671 4957 scope.go:117] "RemoveContainer" containerID="4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.011512 4957 scope.go:117] "RemoveContainer" containerID="97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.026781 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.026815 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098731c3-3fd0-495d-806d-8f8f931110d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.042287 4957 scope.go:117] "RemoveContainer" containerID="9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.064607 4957 scope.go:117] "RemoveContainer" containerID="a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.065069 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab\": container with ID starting with a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab not found: ID does not exist" containerID="a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.065123 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab"} err="failed to get container status \"a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab\": rpc error: code = NotFound desc = could not find container \"a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab\": container with ID starting with a56db990a4062eef4acd63d1eea4b831ee3c18fd858d9fd946627ec6543266ab not found: ID does not exist" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.065159 4957 scope.go:117] "RemoveContainer" containerID="4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.066390 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88\": container with ID starting with 4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88 not found: ID does not exist" containerID="4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.066504 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88"} err="failed to get container status \"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88\": rpc error: code = NotFound desc = could not find container \"4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88\": container with ID starting with 
4b8ffab9a69b94767a29482499927d15a031739357f822c815821acf12255e88 not found: ID does not exist" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.066534 4957 scope.go:117] "RemoveContainer" containerID="97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.066791 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2\": container with ID starting with 97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2 not found: ID does not exist" containerID="97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.066838 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2"} err="failed to get container status \"97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2\": rpc error: code = NotFound desc = could not find container \"97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2\": container with ID starting with 97e4a79220823ecfc43cb75031f1e8469c1da9926cb82bcad741dc9bc03ddfc2 not found: ID does not exist" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.066866 4957 scope.go:117] "RemoveContainer" containerID="9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.067109 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a\": container with ID starting with 9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a not found: ID does not exist" containerID="9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.067133 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a"} err="failed to get container status \"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a\": rpc error: code = NotFound desc = could not find container \"9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a\": container with ID starting with 9a02d3eaa82fcb94a91a6cfb5d7a3d0af7f8f6515ff6312f4c4e6c017c896e2a not found: ID does not exist" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.281729 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.298390 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.336794 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.337336 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="sg-core" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337354 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="sg-core" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.337370 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-central-agent" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337377 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-central-agent" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.337409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="proxy-httpd" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337428 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="proxy-httpd" Feb 18 14:57:42 crc kubenswrapper[4957]: E0218 14:57:42.337440 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-notification-agent" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337448 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-notification-agent" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337671 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="sg-core" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337698 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-central-agent" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337717 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="ceilometer-notification-agent" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.337733 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" containerName="proxy-httpd" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.340055 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.353508 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.353820 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.358479 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.435571 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-config-data\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.435939 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.435975 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsccr\" (UniqueName: \"kubernetes.io/projected/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-kube-api-access-rsccr\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.435995 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.436106 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-scripts\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.436136 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-run-httpd\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.436166 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-log-httpd\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540250 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-config-data\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0" Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540458 
4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540528 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsccr\" (UniqueName: \"kubernetes.io/projected/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-kube-api-access-rsccr\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540593 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540831 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-scripts\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-run-httpd\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.540994 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-log-httpd\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.542208 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-log-httpd\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.548751 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.549309 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-run-httpd\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.565865 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.567511 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsccr\" (UniqueName: \"kubernetes.io/projected/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-kube-api-access-rsccr\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.567978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-config-data\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.568613 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-scripts\") pod \"ceilometer-0\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " pod="openstack/ceilometer-0"
Feb 18 14:57:42 crc kubenswrapper[4957]: I0218 14:57:42.680827 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.219899 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.729544 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nsb6k"]
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.736771 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.744125 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsb6k"]
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.785038 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2zz\" (UniqueName: \"kubernetes.io/projected/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-kube-api-access-7z2zz\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.785178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-catalog-content\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.785270 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-utilities\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.888070 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-utilities\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.888345 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2zz\" (UniqueName: \"kubernetes.io/projected/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-kube-api-access-7z2zz\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.888439 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-catalog-content\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.889019 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-catalog-content\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.889134 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-utilities\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.913338 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2zz\" (UniqueName: \"kubernetes.io/projected/a5d8a39a-4f7f-4d3e-b205-9a209721ca4b-kube-api-access-7z2zz\") pod \"community-operators-nsb6k\" (UID: \"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b\") " pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.977786 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.978432 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.983978 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 18 14:57:43 crc kubenswrapper[4957]: I0218 14:57:43.985835 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.022802 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerStarted","Data":"d2b3a76ef0308cba4fa3ebb7435007fc2a037b90ad5eae014b554f86332ead55"}
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.022901 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.041758 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.072393 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsb6k"
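The mount sequences above all trace the same desired-state/actual-state reconciliation pattern: the reconciler confirms the controller-side attach (reconciler_common.go:245), starts the mount (reconciler_common.go:218), and the operation executor reports MountVolume.SetUp success (operation_generator.go:637). A minimal Go sketch of that control loop follows; the types and method names are illustrative stand-ins, not the kubelet's actual API.

// Sketch of the volume reconcile loop visible in the entries above.
package main

import "fmt"

type volume struct{ name, plugin, podUID string }

type reconciler struct {
	desired map[string]volume // volumes the pod specs want mounted
	actual  map[string]bool   // volumes already mounted on the node
}

func (r *reconciler) reconcile() {
	for key, v := range r.desired {
		if r.actual[key] {
			continue // already mounted; nothing to do this pass
		}
		// Phase 1: confirm the attach (the ":245]" log lines).
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.name)
		// Phase 2: start the mount (":218]"), then record success (":637]").
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		if err := setUp(v); err != nil {
			// A failed SetUp stays in the desired set and is retried
			// on the next reconcile pass.
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", v.name, err)
			continue
		}
		r.actual[key] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func setUp(v volume) error { return nil } // placeholder for the plugin call

func main() {
	r := &reconciler{
		desired: map[string]volume{
			"kube-api-access-7z2zz": {"kube-api-access-7z2zz", "kubernetes.io/projected", "a5d8a39a"},
			"utilities":             {"utilities", "kubernetes.io/empty-dir", "a5d8a39a"},
		},
		actual: map[string]bool{},
	}
	r.reconcile()
}

Because "started" and "succeeded" are emitted by separate phases of this loop, they appear as separate, sometimes interleaved, log entries per volume.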
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.265177 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098731c3-3fd0-495d-806d-8f8f931110d6" path="/var/lib/kubelet/pods/098731c3-3fd0-495d-806d-8f8f931110d6/volumes"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.291504 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.336561 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"]
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.338860 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.402048 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.402129 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.402152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v98b\" (UniqueName: \"kubernetes.io/projected/52088720-a981-46e8-be3c-eee35c337203-kube-api-access-8v98b\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.402180 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.402222 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.402264 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.424487 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"]
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.507778 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.507869 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.507889 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v98b\" (UniqueName: \"kubernetes.io/projected/52088720-a981-46e8-be3c-eee35c337203-kube-api-access-8v98b\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.507918 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.507960 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.507990 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.509532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.510103 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.510644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.511130 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.518768 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.540544 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v98b\" (UniqueName: \"kubernetes.io/projected/52088720-a981-46e8-be3c-eee35c337203-kube-api-access-8v98b\") pod \"dnsmasq-dns-6b7bbf7cf9-9dfnj\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.720326 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"
Feb 18 14:57:44 crc kubenswrapper[4957]: I0218 14:57:44.957730 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsb6k"]
Feb 18 14:57:45 crc kubenswrapper[4957]: I0218 14:57:45.037980 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerStarted","Data":"9be13743bbecc985224ec75eb657b6d3f2cabf139ea7ce55bedec28a1de9ebaf"}
Feb 18 14:57:45 crc kubenswrapper[4957]: I0218 14:57:45.040802 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsb6k" event={"ID":"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b","Type":"ContainerStarted","Data":"9b80de4c171299b1bd2c85d1c36348429f7aa532916dbd42ee9788704a9c2c32"}
Feb 18 14:57:45 crc kubenswrapper[4957]: I0218 14:57:45.293443 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"]
Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.052340 4957 generic.go:334] "Generic (PLEG): container finished" podID="52088720-a981-46e8-be3c-eee35c337203" containerID="7dc6dcd2d39642c23ab8a30597dfe713255b952330e37f782dd5cf1e9be1f01c" exitCode=0
Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.052400 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" event={"ID":"52088720-a981-46e8-be3c-eee35c337203","Type":"ContainerDied","Data":"7dc6dcd2d39642c23ab8a30597dfe713255b952330e37f782dd5cf1e9be1f01c"}
Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.052717 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" event={"ID":"52088720-a981-46e8-be3c-eee35c337203","Type":"ContainerStarted","Data":"c6cffa162176eadc14ccf99431b51292ef45420906b9153acf7c3f38822d4e81"}
Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.059188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerStarted","Data":"ce85bc347667418526982f4f5c3199f9ae0b0615c5e84c47687a5fabc71d899b"}
Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.063618 4957 generic.go:334] "Generic (PLEG): container finished" podID="a5d8a39a-4f7f-4d3e-b205-9a209721ca4b" containerID="755f4b288fb54599022bd8f48acc82ba306b2bf4ec27b7a7d6f40b809f7edb04" exitCode=0
containerID="755f4b288fb54599022bd8f48acc82ba306b2bf4ec27b7a7d6f40b809f7edb04" exitCode=0 Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.064466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsb6k" event={"ID":"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b","Type":"ContainerDied","Data":"755f4b288fb54599022bd8f48acc82ba306b2bf4ec27b7a7d6f40b809f7edb04"} Feb 18 14:57:46 crc kubenswrapper[4957]: I0218 14:57:46.214069 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:57:46 crc kubenswrapper[4957]: E0218 14:57:46.214787 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:57:47 crc kubenswrapper[4957]: I0218 14:57:47.028155 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:47 crc kubenswrapper[4957]: I0218 14:57:47.090720 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerStarted","Data":"122f6d9b86ad2b090728f8a5dab106362988b765b1e16cabea16df7911f18af1"} Feb 18 14:57:47 crc kubenswrapper[4957]: I0218 14:57:47.109307 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" event={"ID":"52088720-a981-46e8-be3c-eee35c337203","Type":"ContainerStarted","Data":"f0d448bdb7fe25c82d9b33bcf7919dba8fe5fa9bc27d4cacd5306fd5c1257981"} Feb 18 14:57:47 crc kubenswrapper[4957]: I0218 14:57:47.109662 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-log" containerID="cri-o://a826a374248b7899d62e1ed034cc30d914edab884d16f334ae62df1eb81d6389" gracePeriod=30 Feb 18 14:57:47 crc kubenswrapper[4957]: I0218 14:57:47.109718 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-api" containerID="cri-o://24e8d5f20b031ca69e3fc1db7624a1216c481cb783d1af65698f6e57b2ee78d7" gracePeriod=30 Feb 18 14:57:47 crc kubenswrapper[4957]: I0218 14:57:47.165063 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" podStartSLOduration=3.16503983 podStartE2EDuration="3.16503983s" podCreationTimestamp="2026-02-18 14:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:47.142016194 +0000 UTC m=+1573.662880938" watchObservedRunningTime="2026-02-18 14:57:47.16503983 +0000 UTC m=+1573.685904574" Feb 18 14:57:48 crc kubenswrapper[4957]: I0218 14:57:48.128684 4957 generic.go:334] "Generic (PLEG): container finished" podID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerID="a826a374248b7899d62e1ed034cc30d914edab884d16f334ae62df1eb81d6389" exitCode=143 Feb 18 14:57:48 crc kubenswrapper[4957]: I0218 14:57:48.130438 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3c417d58-815f-4a5d-aacf-bb6a06bc8b51","Type":"ContainerDied","Data":"a826a374248b7899d62e1ed034cc30d914edab884d16f334ae62df1eb81d6389"} Feb 18 14:57:48 crc kubenswrapper[4957]: I0218 14:57:48.130478 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" Feb 18 14:57:48 crc kubenswrapper[4957]: I0218 14:57:48.791099 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:49 crc kubenswrapper[4957]: I0218 14:57:49.142388 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerStarted","Data":"dc8f259ad6031168350986394195d61c1bcddbafa9478819494bfb158caa9f0f"} Feb 18 14:57:49 crc kubenswrapper[4957]: I0218 14:57:49.172284 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.293329522 podStartE2EDuration="7.172264963s" podCreationTimestamp="2026-02-18 14:57:42 +0000 UTC" firstStartedPulling="2026-02-18 14:57:43.194461 +0000 UTC m=+1569.715325744" lastFinishedPulling="2026-02-18 14:57:48.073396441 +0000 UTC m=+1574.594261185" observedRunningTime="2026-02-18 14:57:49.164785477 +0000 UTC m=+1575.685650221" watchObservedRunningTime="2026-02-18 14:57:49.172264963 +0000 UTC m=+1575.693129707" Feb 18 14:57:49 crc kubenswrapper[4957]: I0218 14:57:49.287429 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:49 crc kubenswrapper[4957]: I0218 14:57:49.312463 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.151881 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-central-agent" containerID="cri-o://9be13743bbecc985224ec75eb657b6d3f2cabf139ea7ce55bedec28a1de9ebaf" gracePeriod=30 Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.151903 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="proxy-httpd" containerID="cri-o://dc8f259ad6031168350986394195d61c1bcddbafa9478819494bfb158caa9f0f" gracePeriod=30 Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.151920 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.151954 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-notification-agent" containerID="cri-o://ce85bc347667418526982f4f5c3199f9ae0b0615c5e84c47687a5fabc71d899b" gracePeriod=30 Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.151963 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="sg-core" containerID="cri-o://122f6d9b86ad2b090728f8a5dab106362988b765b1e16cabea16df7911f18af1" gracePeriod=30 Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.175213 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.363572 4957 
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.366429 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.369955 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.372732 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.376133 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-npjhw"]
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.485527 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77jk\" (UniqueName: \"kubernetes.io/projected/4fe65613-300e-43e4-82df-4480ee80a335-kube-api-access-g77jk\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.485998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-scripts\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.486054 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.486170 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-config-data\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.587870 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77jk\" (UniqueName: \"kubernetes.io/projected/4fe65613-300e-43e4-82df-4480ee80a335-kube-api-access-g77jk\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.587997 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-scripts\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.588041 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.588123 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-config-data\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.594805 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.598097 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-scripts\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.599286 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-config-data\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.614806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77jk\" (UniqueName: \"kubernetes.io/projected/4fe65613-300e-43e4-82df-4480ee80a335-kube-api-access-g77jk\") pod \"nova-cell1-cell-mapping-npjhw\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:50 crc kubenswrapper[4957]: I0218 14:57:50.714217 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npjhw"
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.166742 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerID="dc8f259ad6031168350986394195d61c1bcddbafa9478819494bfb158caa9f0f" exitCode=0
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.166785 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerID="122f6d9b86ad2b090728f8a5dab106362988b765b1e16cabea16df7911f18af1" exitCode=2
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.166798 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerID="ce85bc347667418526982f4f5c3199f9ae0b0615c5e84c47687a5fabc71d899b" exitCode=0
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.166866 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerDied","Data":"dc8f259ad6031168350986394195d61c1bcddbafa9478819494bfb158caa9f0f"}
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.166900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerDied","Data":"122f6d9b86ad2b090728f8a5dab106362988b765b1e16cabea16df7911f18af1"}
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.166915 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerDied","Data":"ce85bc347667418526982f4f5c3199f9ae0b0615c5e84c47687a5fabc71d899b"}
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.169086 4957 generic.go:334] "Generic (PLEG): container finished" podID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerID="24e8d5f20b031ca69e3fc1db7624a1216c481cb783d1af65698f6e57b2ee78d7" exitCode=0
Feb 18 14:57:51 crc kubenswrapper[4957]: I0218 14:57:51.170397 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c417d58-815f-4a5d-aacf-bb6a06bc8b51","Type":"ContainerDied","Data":"24e8d5f20b031ca69e3fc1db7624a1216c481cb783d1af65698f6e57b2ee78d7"}
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.187339 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.236059 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3c417d58-815f-4a5d-aacf-bb6a06bc8b51","Type":"ContainerDied","Data":"899cfbf2fa1216a8a254743ed791d61a63101d6b07789192ed4157e0194ca36e"}
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.236102 4957 scope.go:117] "RemoveContainer" containerID="24e8d5f20b031ca69e3fc1db7624a1216c481cb783d1af65698f6e57b2ee78d7"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.236225 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.282608 4957 scope.go:117] "RemoveContainer" containerID="a826a374248b7899d62e1ed034cc30d914edab884d16f334ae62df1eb81d6389"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.283257 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-combined-ca-bundle\") pod \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") "
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.283328 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6bk7\" (UniqueName: \"kubernetes.io/projected/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-kube-api-access-h6bk7\") pod \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") "
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.283434 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-logs\") pod \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") "
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.283552 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-config-data\") pod \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\" (UID: \"3c417d58-815f-4a5d-aacf-bb6a06bc8b51\") "
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.288230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-logs" (OuterVolumeSpecName: "logs") pod "3c417d58-815f-4a5d-aacf-bb6a06bc8b51" (UID: "3c417d58-815f-4a5d-aacf-bb6a06bc8b51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.294647 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-kube-api-access-h6bk7" (OuterVolumeSpecName: "kube-api-access-h6bk7") pod "3c417d58-815f-4a5d-aacf-bb6a06bc8b51" (UID: "3c417d58-815f-4a5d-aacf-bb6a06bc8b51"). InnerVolumeSpecName "kube-api-access-h6bk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.329872 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-config-data" (OuterVolumeSpecName: "config-data") pod "3c417d58-815f-4a5d-aacf-bb6a06bc8b51" (UID: "3c417d58-815f-4a5d-aacf-bb6a06bc8b51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.357704 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c417d58-815f-4a5d-aacf-bb6a06bc8b51" (UID: "3c417d58-815f-4a5d-aacf-bb6a06bc8b51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.360061 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-npjhw"]
Feb 18 14:57:53 crc kubenswrapper[4957]: W0218 14:57:53.361841 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fe65613_300e_43e4_82df_4480ee80a335.slice/crio-482063088e1079df2cfb287c31f002f27b0a76c94139576dd7448054bc82233f WatchSource:0}: Error finding container 482063088e1079df2cfb287c31f002f27b0a76c94139576dd7448054bc82233f: Status 404 returned error can't find the container with id 482063088e1079df2cfb287c31f002f27b0a76c94139576dd7448054bc82233f
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.387546 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.387583 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6bk7\" (UniqueName: \"kubernetes.io/projected/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-kube-api-access-h6bk7\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.387599 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-logs\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.387609 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c417d58-815f-4a5d-aacf-bb6a06bc8b51-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.735899 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.755859 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.779995 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:57:53 crc kubenswrapper[4957]: E0218 14:57:53.780579 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-api"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.780596 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-api"
Feb 18 14:57:53 crc kubenswrapper[4957]: E0218 14:57:53.780644 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-log"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.780653 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-log"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.780903 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-log"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.780945 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" containerName="nova-api-api"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.784600 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.790275 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.790466 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.797714 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.802659 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.905467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-public-tls-certs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.905547 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-config-data\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.905607 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmd7\" (UniqueName: \"kubernetes.io/projected/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-kube-api-access-htmd7\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.905681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.905781 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-logs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:53 crc kubenswrapper[4957]: I0218 14:57:53.905818 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.007584 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.009017 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-logs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.009054 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.009190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-public-tls-certs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.009231 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-config-data\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.009293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmd7\" (UniqueName: \"kubernetes.io/projected/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-kube-api-access-htmd7\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.011763 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-logs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.016261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-config-data\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.016836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-public-tls-certs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.019593 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.029517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.029596 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmd7\" (UniqueName: \"kubernetes.io/projected/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-kube-api-access-htmd7\") pod \"nova-api-0\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " pod="openstack/nova-api-0"
pod="openstack/nova-api-0" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.219450 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.238351 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c417d58-815f-4a5d-aacf-bb6a06bc8b51" path="/var/lib/kubelet/pods/3c417d58-815f-4a5d-aacf-bb6a06bc8b51/volumes" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.259250 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerID="9be13743bbecc985224ec75eb657b6d3f2cabf139ea7ce55bedec28a1de9ebaf" exitCode=0 Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.259305 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerDied","Data":"9be13743bbecc985224ec75eb657b6d3f2cabf139ea7ce55bedec28a1de9ebaf"} Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.259330 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23","Type":"ContainerDied","Data":"d2b3a76ef0308cba4fa3ebb7435007fc2a037b90ad5eae014b554f86332ead55"} Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.259340 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2b3a76ef0308cba4fa3ebb7435007fc2a037b90ad5eae014b554f86332ead55" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.261072 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.261607 4957 generic.go:334] "Generic (PLEG): container finished" podID="a5d8a39a-4f7f-4d3e-b205-9a209721ca4b" containerID="801d7c91d9fcb29370d25297093f49ad57794ebf202d27e939e710b842462c1e" exitCode=0 Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.261676 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsb6k" event={"ID":"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b","Type":"ContainerDied","Data":"801d7c91d9fcb29370d25297093f49ad57794ebf202d27e939e710b842462c1e"} Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.290348 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npjhw" event={"ID":"4fe65613-300e-43e4-82df-4480ee80a335","Type":"ContainerStarted","Data":"edf131b19d8fe0b28dfbd541a784a4ad56855eeedfb67fee7f5171f0ea8ac5ed"} Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.290393 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npjhw" event={"ID":"4fe65613-300e-43e4-82df-4480ee80a335","Type":"ContainerStarted","Data":"482063088e1079df2cfb287c31f002f27b0a76c94139576dd7448054bc82233f"} Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.315922 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-log-httpd\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.316015 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-scripts\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: 
\"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.316178 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-sg-core-conf-yaml\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.316332 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-config-data\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.316462 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsccr\" (UniqueName: \"kubernetes.io/projected/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-kube-api-access-rsccr\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.316683 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-combined-ca-bundle\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.316708 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-run-httpd\") pod \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\" (UID: \"3d3e9c34-3c6d-42ae-a4b8-4961f1929a23\") " Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.325704 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.325734 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.327719 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.327751 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.335276 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-kube-api-access-rsccr" (OuterVolumeSpecName: "kube-api-access-rsccr") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). 
InnerVolumeSpecName "kube-api-access-rsccr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.339841 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-scripts" (OuterVolumeSpecName: "scripts") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.350036 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-npjhw" podStartSLOduration=4.350016129 podStartE2EDuration="4.350016129s" podCreationTimestamp="2026-02-18 14:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:54.348768543 +0000 UTC m=+1580.869633287" watchObservedRunningTime="2026-02-18 14:57:54.350016129 +0000 UTC m=+1580.870880873" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.414828 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.432263 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsccr\" (UniqueName: \"kubernetes.io/projected/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-kube-api-access-rsccr\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.432303 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.432318 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.477651 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.504625 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-config-data" (OuterVolumeSpecName: "config-data") pod "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" (UID: "3d3e9c34-3c6d-42ae-a4b8-4961f1929a23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.534727 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.534766 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.722650 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.807212 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rk4dn"] Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.807505 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="dnsmasq-dns" containerID="cri-o://9be2ea1046c81e0d3232168b8c0567f2c9808ab66131d4621b5d6d0ab2dbc9b1" gracePeriod=10 Feb 18 14:57:54 crc kubenswrapper[4957]: I0218 14:57:54.821476 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.340322 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsb6k" event={"ID":"a5d8a39a-4f7f-4d3e-b205-9a209721ca4b","Type":"ContainerStarted","Data":"2a91c301ea3a0ffc8b0cc5fa538f120b4b64c18876510d9ed056aa48fdc7e1b2"} Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.348970 4957 generic.go:334] "Generic (PLEG): container finished" podID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerID="9be2ea1046c81e0d3232168b8c0567f2c9808ab66131d4621b5d6d0ab2dbc9b1" exitCode=0 Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.349071 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" event={"ID":"31309517-a9cd-40f9-8f3b-3f9b92f96247","Type":"ContainerDied","Data":"9be2ea1046c81e0d3232168b8c0567f2c9808ab66131d4621b5d6d0ab2dbc9b1"} Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.359940 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.362408 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83538f1b-8063-4c0c-8bd3-ba87ce3a0637","Type":"ContainerStarted","Data":"7190e63d15f3b6d325760c35017fa737c121204867012c337582314afdc0cbb3"} Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.362503 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83538f1b-8063-4c0c-8bd3-ba87ce3a0637","Type":"ContainerStarted","Data":"875ce42690c059655b388c4c0462578f2439327cd859ec56a320f2afa48dc55a"} Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.375912 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.377879 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsb6k" podStartSLOduration=3.749750723 podStartE2EDuration="12.377862167s" podCreationTimestamp="2026-02-18 14:57:43 +0000 UTC" firstStartedPulling="2026-02-18 14:57:46.067519535 +0000 UTC m=+1572.588384279" lastFinishedPulling="2026-02-18 14:57:54.695630989 +0000 UTC m=+1581.216495723" observedRunningTime="2026-02-18 14:57:55.359836286 +0000 UTC m=+1581.880701030" watchObservedRunningTime="2026-02-18 14:57:55.377862167 +0000 UTC m=+1581.898726911" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.462272 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-svc\") pod \"31309517-a9cd-40f9-8f3b-3f9b92f96247\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.462337 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-swift-storage-0\") pod \"31309517-a9cd-40f9-8f3b-3f9b92f96247\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.462460 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrms\" (UniqueName: \"kubernetes.io/projected/31309517-a9cd-40f9-8f3b-3f9b92f96247-kube-api-access-mnrms\") pod \"31309517-a9cd-40f9-8f3b-3f9b92f96247\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.462487 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-config\") pod \"31309517-a9cd-40f9-8f3b-3f9b92f96247\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.462542 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-nb\") pod \"31309517-a9cd-40f9-8f3b-3f9b92f96247\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.462682 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-sb\") pod \"31309517-a9cd-40f9-8f3b-3f9b92f96247\" (UID: \"31309517-a9cd-40f9-8f3b-3f9b92f96247\") " Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.489485 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.498964 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31309517-a9cd-40f9-8f3b-3f9b92f96247-kube-api-access-mnrms" (OuterVolumeSpecName: "kube-api-access-mnrms") pod "31309517-a9cd-40f9-8f3b-3f9b92f96247" (UID: "31309517-a9cd-40f9-8f3b-3f9b92f96247"). InnerVolumeSpecName "kube-api-access-mnrms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.529575 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.570659 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrms\" (UniqueName: \"kubernetes.io/projected/31309517-a9cd-40f9-8f3b-3f9b92f96247-kube-api-access-mnrms\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.588589 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:55 crc kubenswrapper[4957]: E0218 14:57:55.611436 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-notification-agent" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.611710 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-notification-agent" Feb 18 14:57:55 crc kubenswrapper[4957]: E0218 14:57:55.611823 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-central-agent" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.611887 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-central-agent" Feb 18 14:57:55 crc kubenswrapper[4957]: E0218 14:57:55.612043 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="sg-core" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.612120 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="sg-core" Feb 18 14:57:55 crc kubenswrapper[4957]: E0218 14:57:55.612190 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="init" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.612253 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="init" Feb 18 14:57:55 crc kubenswrapper[4957]: E0218 14:57:55.612322 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="proxy-httpd" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.612390 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="proxy-httpd" Feb 18 14:57:55 crc kubenswrapper[4957]: E0218 14:57:55.612494 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="dnsmasq-dns" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.612562 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="dnsmasq-dns" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.613112 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-central-agent" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.613212 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="ceilometer-notification-agent" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.613289 4957 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="dnsmasq-dns" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.613363 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="sg-core" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.613462 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" containerName="proxy-httpd" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.616176 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.677179 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-scripts\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.677492 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-config-data\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.677597 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.677764 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.677887 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-log-httpd\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.683371 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.686868 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-run-httpd\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.683824 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.687080 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fgn\" (UniqueName: \"kubernetes.io/projected/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-kube-api-access-m2fgn\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " 
pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.697252 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-config" (OuterVolumeSpecName: "config") pod "31309517-a9cd-40f9-8f3b-3f9b92f96247" (UID: "31309517-a9cd-40f9-8f3b-3f9b92f96247"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.723678 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.733708 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31309517-a9cd-40f9-8f3b-3f9b92f96247" (UID: "31309517-a9cd-40f9-8f3b-3f9b92f96247"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795184 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-scripts\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795240 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-config-data\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795370 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795449 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-log-httpd\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795511 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-run-httpd\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795557 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2fgn\" (UniqueName: \"kubernetes.io/projected/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-kube-api-access-m2fgn\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.795727 
4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.796921 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "31309517-a9cd-40f9-8f3b-3f9b92f96247" (UID: "31309517-a9cd-40f9-8f3b-3f9b92f96247"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.798578 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.801943 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-run-httpd\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.802163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-log-httpd\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.803101 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.810575 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-scripts\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.810773 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.819272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-config-data\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.831433 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2fgn\" (UniqueName: \"kubernetes.io/projected/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-kube-api-access-m2fgn\") pod \"ceilometer-0\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") " pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.836739 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "31309517-a9cd-40f9-8f3b-3f9b92f96247" (UID: "31309517-a9cd-40f9-8f3b-3f9b92f96247"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.837759 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31309517-a9cd-40f9-8f3b-3f9b92f96247" (UID: "31309517-a9cd-40f9-8f3b-3f9b92f96247"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.896909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.901190 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.901217 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:55 crc kubenswrapper[4957]: I0218 14:57:55.901230 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31309517-a9cd-40f9-8f3b-3f9b92f96247-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.229570 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3e9c34-3c6d-42ae-a4b8-4961f1929a23" path="/var/lib/kubelet/pods/3d3e9c34-3c6d-42ae-a4b8-4961f1929a23/volumes" Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.397711 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" event={"ID":"31309517-a9cd-40f9-8f3b-3f9b92f96247","Type":"ContainerDied","Data":"42942c6833da331980fdd18c84f49f9c1c92955b12665d43ae16d6219f170ef2"} Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.397743 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.397774 4957 scope.go:117] "RemoveContainer" containerID="9be2ea1046c81e0d3232168b8c0567f2c9808ab66131d4621b5d6d0ab2dbc9b1" Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.407822 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83538f1b-8063-4c0c-8bd3-ba87ce3a0637","Type":"ContainerStarted","Data":"3ed4d86272f862af96e092e6eb71fba3f83315754986dd67a0ef40866cf6923a"} Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.429237 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rk4dn"] Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.451237 4957 scope.go:117] "RemoveContainer" containerID="f37df6fd7451398bce69af825f9808acd4f567e0f9efcbd6f0486f5f9b407749" Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.453262 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rk4dn"] Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.476265 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.476244956 podStartE2EDuration="3.476244956s" podCreationTimestamp="2026-02-18 14:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:57:56.463782235 +0000 UTC m=+1582.984646989" watchObservedRunningTime="2026-02-18 14:57:56.476244956 +0000 UTC m=+1582.997109700" Feb 18 14:57:56 crc kubenswrapper[4957]: I0218 14:57:56.497763 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:57:56 crc kubenswrapper[4957]: W0218 14:57:56.550697 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e6e661_a1cd_4ff5_becc_87ff6dd1fb4b.slice/crio-533953a9f3b8b4ff239f8688cbd02b613ed79898acb173ef09449bfc6ba1b819 WatchSource:0}: Error finding container 533953a9f3b8b4ff239f8688cbd02b613ed79898acb173ef09449bfc6ba1b819: Status 404 returned error can't find the container with id 533953a9f3b8b4ff239f8688cbd02b613ed79898acb173ef09449bfc6ba1b819 Feb 18 14:57:57 crc kubenswrapper[4957]: I0218 14:57:57.423320 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerStarted","Data":"533953a9f3b8b4ff239f8688cbd02b613ed79898acb173ef09449bfc6ba1b819"} Feb 18 14:57:58 crc kubenswrapper[4957]: I0218 14:57:58.227095 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" path="/var/lib/kubelet/pods/31309517-a9cd-40f9-8f3b-3f9b92f96247/volumes" Feb 18 14:57:58 crc kubenswrapper[4957]: I0218 14:57:58.440061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerStarted","Data":"eeb170b903af85131dc6ae701cedca0c4de379321ed706405103fd7adfcd395e"} Feb 18 14:57:59 crc kubenswrapper[4957]: I0218 14:57:59.212653 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:57:59 crc kubenswrapper[4957]: E0218 14:57:59.213261 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:57:59 crc kubenswrapper[4957]: I0218 14:57:59.477202 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerStarted","Data":"1e16c55871f4bb19653344e2eff10737813b704cee6acc7536efa492fb30c80f"} Feb 18 14:58:00 crc kubenswrapper[4957]: I0218 14:58:00.221664 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-rk4dn" podUID="31309517-a9cd-40f9-8f3b-3f9b92f96247" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.242:5353: i/o timeout" Feb 18 14:58:00 crc kubenswrapper[4957]: I0218 14:58:00.517159 4957 generic.go:334] "Generic (PLEG): container finished" podID="4fe65613-300e-43e4-82df-4480ee80a335" containerID="edf131b19d8fe0b28dfbd541a784a4ad56855eeedfb67fee7f5171f0ea8ac5ed" exitCode=0 Feb 18 14:58:00 crc kubenswrapper[4957]: I0218 14:58:00.517649 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npjhw" event={"ID":"4fe65613-300e-43e4-82df-4480ee80a335","Type":"ContainerDied","Data":"edf131b19d8fe0b28dfbd541a784a4ad56855eeedfb67fee7f5171f0ea8ac5ed"} Feb 18 14:58:00 crc kubenswrapper[4957]: I0218 14:58:00.523547 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerStarted","Data":"bb8dea5340466054afe539ff2ccc7ab58a1810b312586cd487a6a6e73dc482ad"} Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.550389 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vdxnv"] Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.557960 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.575142 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vdxnv"] Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.679340 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-utilities\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.679440 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrvt\" (UniqueName: \"kubernetes.io/projected/139c8bb5-20f4-4bf6-a384-551cd08dd46e-kube-api-access-xhrvt\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.679538 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-catalog-content\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.782319 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-utilities\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.782475 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhrvt\" (UniqueName: \"kubernetes.io/projected/139c8bb5-20f4-4bf6-a384-551cd08dd46e-kube-api-access-xhrvt\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.782614 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-catalog-content\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.782733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-utilities\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.783109 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-catalog-content\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.809356 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xhrvt\" (UniqueName: \"kubernetes.io/projected/139c8bb5-20f4-4bf6-a384-551cd08dd46e-kube-api-access-xhrvt\") pod \"certified-operators-vdxnv\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:01 crc kubenswrapper[4957]: I0218 14:58:01.905774 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.104212 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npjhw" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.199174 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-scripts\") pod \"4fe65613-300e-43e4-82df-4480ee80a335\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.199387 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77jk\" (UniqueName: \"kubernetes.io/projected/4fe65613-300e-43e4-82df-4480ee80a335-kube-api-access-g77jk\") pod \"4fe65613-300e-43e4-82df-4480ee80a335\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.199488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-config-data\") pod \"4fe65613-300e-43e4-82df-4480ee80a335\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.199557 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-combined-ca-bundle\") pod \"4fe65613-300e-43e4-82df-4480ee80a335\" (UID: \"4fe65613-300e-43e4-82df-4480ee80a335\") " Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.205121 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe65613-300e-43e4-82df-4480ee80a335-kube-api-access-g77jk" (OuterVolumeSpecName: "kube-api-access-g77jk") pod "4fe65613-300e-43e4-82df-4480ee80a335" (UID: "4fe65613-300e-43e4-82df-4480ee80a335"). InnerVolumeSpecName "kube-api-access-g77jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.205674 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-scripts" (OuterVolumeSpecName: "scripts") pod "4fe65613-300e-43e4-82df-4480ee80a335" (UID: "4fe65613-300e-43e4-82df-4480ee80a335"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.233408 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-config-data" (OuterVolumeSpecName: "config-data") pod "4fe65613-300e-43e4-82df-4480ee80a335" (UID: "4fe65613-300e-43e4-82df-4480ee80a335"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.243749 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe65613-300e-43e4-82df-4480ee80a335" (UID: "4fe65613-300e-43e4-82df-4480ee80a335"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.305652 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.305682 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77jk\" (UniqueName: \"kubernetes.io/projected/4fe65613-300e-43e4-82df-4480ee80a335-kube-api-access-g77jk\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.305694 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.305703 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe65613-300e-43e4-82df-4480ee80a335-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.429700 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vdxnv"] Feb 18 14:58:02 crc kubenswrapper[4957]: W0218 14:58:02.443656 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139c8bb5_20f4_4bf6_a384_551cd08dd46e.slice/crio-fd0474c2c67f3e5bf154b28f1639e7b8a7ed45a24af0be7eb669993c4c1499b6 WatchSource:0}: Error finding container fd0474c2c67f3e5bf154b28f1639e7b8a7ed45a24af0be7eb669993c4c1499b6: Status 404 returned error can't find the container with id fd0474c2c67f3e5bf154b28f1639e7b8a7ed45a24af0be7eb669993c4c1499b6 Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.582018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-npjhw" event={"ID":"4fe65613-300e-43e4-82df-4480ee80a335","Type":"ContainerDied","Data":"482063088e1079df2cfb287c31f002f27b0a76c94139576dd7448054bc82233f"} Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.582388 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482063088e1079df2cfb287c31f002f27b0a76c94139576dd7448054bc82233f" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.582546 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-npjhw" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.587080 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerStarted","Data":"63740087cdbbd8d9d348d53457e27e0ca5fe2fa12f187074387563ee9ead2b6b"} Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.587337 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.589799 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerStarted","Data":"fd0474c2c67f3e5bf154b28f1639e7b8a7ed45a24af0be7eb669993c4c1499b6"} Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.621134 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.404808183 podStartE2EDuration="7.621117184s" podCreationTimestamp="2026-02-18 14:57:55 +0000 UTC" firstStartedPulling="2026-02-18 14:57:56.555901101 +0000 UTC m=+1583.076765845" lastFinishedPulling="2026-02-18 14:58:01.772210102 +0000 UTC m=+1588.293074846" observedRunningTime="2026-02-18 14:58:02.604095921 +0000 UTC m=+1589.124960665" watchObservedRunningTime="2026-02-18 14:58:02.621117184 +0000 UTC m=+1589.141981928" Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.788539 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.788798 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0c252f5d-2954-4fdd-a999-29b7c6469ade" containerName="nova-scheduler-scheduler" containerID="cri-o://c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" gracePeriod=30 Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.815135 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.815451 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-log" containerID="cri-o://7190e63d15f3b6d325760c35017fa737c121204867012c337582314afdc0cbb3" gracePeriod=30 Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.816035 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-api" containerID="cri-o://3ed4d86272f862af96e092e6eb71fba3f83315754986dd67a0ef40866cf6923a" gracePeriod=30 Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.854181 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.854466 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-log" containerID="cri-o://6d4d3e813992d51b14303f85553e5b8c2444d099c20f27f2074f789417281247" gracePeriod=30 Feb 18 14:58:02 crc kubenswrapper[4957]: I0218 14:58:02.855242 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" 
containerName="nova-metadata-metadata" containerID="cri-o://1ccf214a29b8e0df23866d723b91742a0ad63fcd37d6787b81417e5862edbcec" gracePeriod=30 Feb 18 14:58:03 crc kubenswrapper[4957]: E0218 14:58:03.234481 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:58:03 crc kubenswrapper[4957]: E0218 14:58:03.252849 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:58:03 crc kubenswrapper[4957]: E0218 14:58:03.298000 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:58:03 crc kubenswrapper[4957]: E0218 14:58:03.298069 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0c252f5d-2954-4fdd-a999-29b7c6469ade" containerName="nova-scheduler-scheduler" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.601475 4957 generic.go:334] "Generic (PLEG): container finished" podID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerID="8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af" exitCode=0 Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.601529 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerDied","Data":"8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af"} Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.605304 4957 generic.go:334] "Generic (PLEG): container finished" podID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerID="6d4d3e813992d51b14303f85553e5b8c2444d099c20f27f2074f789417281247" exitCode=143 Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.605355 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8ad40e-e92a-4b1c-b081-7da83ab34669","Type":"ContainerDied","Data":"6d4d3e813992d51b14303f85553e5b8c2444d099c20f27f2074f789417281247"} Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.607977 4957 generic.go:334] "Generic (PLEG): container finished" podID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerID="3ed4d86272f862af96e092e6eb71fba3f83315754986dd67a0ef40866cf6923a" exitCode=0 Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.608480 4957 generic.go:334] "Generic (PLEG): container finished" podID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerID="7190e63d15f3b6d325760c35017fa737c121204867012c337582314afdc0cbb3" exitCode=143 Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.612107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"83538f1b-8063-4c0c-8bd3-ba87ce3a0637","Type":"ContainerDied","Data":"3ed4d86272f862af96e092e6eb71fba3f83315754986dd67a0ef40866cf6923a"} Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.612175 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83538f1b-8063-4c0c-8bd3-ba87ce3a0637","Type":"ContainerDied","Data":"7190e63d15f3b6d325760c35017fa737c121204867012c337582314afdc0cbb3"} Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.671230 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.778713 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-combined-ca-bundle\") pod \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.778791 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htmd7\" (UniqueName: \"kubernetes.io/projected/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-kube-api-access-htmd7\") pod \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.778819 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-logs\") pod \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.778860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-config-data\") pod \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.778933 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-public-tls-certs\") pod \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.779016 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-internal-tls-certs\") pod \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\" (UID: \"83538f1b-8063-4c0c-8bd3-ba87ce3a0637\") " Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.781069 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-logs" (OuterVolumeSpecName: "logs") pod "83538f1b-8063-4c0c-8bd3-ba87ce3a0637" (UID: "83538f1b-8063-4c0c-8bd3-ba87ce3a0637"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.807276 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-kube-api-access-htmd7" (OuterVolumeSpecName: "kube-api-access-htmd7") pod "83538f1b-8063-4c0c-8bd3-ba87ce3a0637" (UID: "83538f1b-8063-4c0c-8bd3-ba87ce3a0637"). InnerVolumeSpecName "kube-api-access-htmd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.836477 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83538f1b-8063-4c0c-8bd3-ba87ce3a0637" (UID: "83538f1b-8063-4c0c-8bd3-ba87ce3a0637"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.862974 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-config-data" (OuterVolumeSpecName: "config-data") pod "83538f1b-8063-4c0c-8bd3-ba87ce3a0637" (UID: "83538f1b-8063-4c0c-8bd3-ba87ce3a0637"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.869270 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "83538f1b-8063-4c0c-8bd3-ba87ce3a0637" (UID: "83538f1b-8063-4c0c-8bd3-ba87ce3a0637"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.875570 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83538f1b-8063-4c0c-8bd3-ba87ce3a0637" (UID: "83538f1b-8063-4c0c-8bd3-ba87ce3a0637"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.882298 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.882332 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htmd7\" (UniqueName: \"kubernetes.io/projected/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-kube-api-access-htmd7\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.882347 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.882360 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.882370 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:03 crc kubenswrapper[4957]: I0218 14:58:03.882381 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83538f1b-8063-4c0c-8bd3-ba87ce3a0637-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.074644 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-nsb6k" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.075653 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nsb6k" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.148716 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsb6k" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.316104 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.395342 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-combined-ca-bundle\") pod \"0c252f5d-2954-4fdd-a999-29b7c6469ade\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.395489 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-config-data\") pod \"0c252f5d-2954-4fdd-a999-29b7c6469ade\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.395682 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5p6p\" (UniqueName: \"kubernetes.io/projected/0c252f5d-2954-4fdd-a999-29b7c6469ade-kube-api-access-k5p6p\") pod \"0c252f5d-2954-4fdd-a999-29b7c6469ade\" (UID: \"0c252f5d-2954-4fdd-a999-29b7c6469ade\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.402874 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c252f5d-2954-4fdd-a999-29b7c6469ade-kube-api-access-k5p6p" (OuterVolumeSpecName: "kube-api-access-k5p6p") pod "0c252f5d-2954-4fdd-a999-29b7c6469ade" (UID: "0c252f5d-2954-4fdd-a999-29b7c6469ade"). InnerVolumeSpecName "kube-api-access-k5p6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.471591 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c252f5d-2954-4fdd-a999-29b7c6469ade" (UID: "0c252f5d-2954-4fdd-a999-29b7c6469ade"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.479975 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-config-data" (OuterVolumeSpecName: "config-data") pod "0c252f5d-2954-4fdd-a999-29b7c6469ade" (UID: "0c252f5d-2954-4fdd-a999-29b7c6469ade"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.502410 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5p6p\" (UniqueName: \"kubernetes.io/projected/0c252f5d-2954-4fdd-a999-29b7c6469ade-kube-api-access-k5p6p\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.502463 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.502476 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252f5d-2954-4fdd-a999-29b7c6469ade-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.561611 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.625207 4957 generic.go:334] "Generic (PLEG): container finished" podID="0c252f5d-2954-4fdd-a999-29b7c6469ade" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" exitCode=0 Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.625288 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.625287 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c252f5d-2954-4fdd-a999-29b7c6469ade","Type":"ContainerDied","Data":"c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68"} Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.625460 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c252f5d-2954-4fdd-a999-29b7c6469ade","Type":"ContainerDied","Data":"3330ed036e421b617733c035a76899a1f41b82ae47cda2114f8962b1f08884d8"} Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.625494 4957 scope.go:117] "RemoveContainer" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.633411 4957 generic.go:334] "Generic (PLEG): container finished" podID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerID="8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc" exitCode=137 Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.633519 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.633555 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerDied","Data":"8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc"} Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.633613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f31564e5-da12-4b69-b9ca-da180143fcf7","Type":"ContainerDied","Data":"87a6a19ca35f71fb79b295723e52b5f0c3c9dbde9c89de09efa9d66c79c89662"} Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.647817 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.648544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83538f1b-8063-4c0c-8bd3-ba87ce3a0637","Type":"ContainerDied","Data":"875ce42690c059655b388c4c0462578f2439327cd859ec56a320f2afa48dc55a"} Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.662239 4957 scope.go:117] "RemoveContainer" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.663113 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68\": container with ID starting with c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68 not found: ID does not exist" containerID="c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.663146 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68"} err="failed to get container status \"c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68\": rpc error: code = NotFound desc = could not find container \"c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68\": container with ID starting with c1c48296ba99dc61517cc9d7d5b9006037d8173774ac9cbdcfe7c418c4924a68 not found: ID does not exist" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.663187 4957 scope.go:117] "RemoveContainer" containerID="8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.695040 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.706899 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.707628 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvqkv\" (UniqueName: \"kubernetes.io/projected/f31564e5-da12-4b69-b9ca-da180143fcf7-kube-api-access-dvqkv\") pod \"f31564e5-da12-4b69-b9ca-da180143fcf7\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.707718 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-scripts\") pod \"f31564e5-da12-4b69-b9ca-da180143fcf7\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.707778 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-config-data\") pod \"f31564e5-da12-4b69-b9ca-da180143fcf7\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.707838 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-combined-ca-bundle\") pod \"f31564e5-da12-4b69-b9ca-da180143fcf7\" (UID: \"f31564e5-da12-4b69-b9ca-da180143fcf7\") " Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.717592 4957 
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.717592 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-scripts" (OuterVolumeSpecName: "scripts") pod "f31564e5-da12-4b69-b9ca-da180143fcf7" (UID: "f31564e5-da12-4b69-b9ca-da180143fcf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.732964 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.747392 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nsb6k"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.747361 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31564e5-da12-4b69-b9ca-da180143fcf7-kube-api-access-dvqkv" (OuterVolumeSpecName: "kube-api-access-dvqkv") pod "f31564e5-da12-4b69-b9ca-da180143fcf7" (UID: "f31564e5-da12-4b69-b9ca-da180143fcf7"). InnerVolumeSpecName "kube-api-access-dvqkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.747682 4957 scope.go:117] "RemoveContainer" containerID="cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.748576 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.764741 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775249 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-evaluator"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775294 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-evaluator"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775311 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c252f5d-2954-4fdd-a999-29b7c6469ade" containerName="nova-scheduler-scheduler"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775322 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c252f5d-2954-4fdd-a999-29b7c6469ade" containerName="nova-scheduler-scheduler"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775344 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-listener"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775352 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-listener"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775383 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-api"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775393 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-api"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775435 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-api"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775443 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-api"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775456 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe65613-300e-43e4-82df-4480ee80a335" containerName="nova-manage"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775465 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe65613-300e-43e4-82df-4480ee80a335" containerName="nova-manage"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775477 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-notifier"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775485 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-notifier"
Feb 18 14:58:04 crc kubenswrapper[4957]: E0218 14:58:04.775511 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-log"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775518 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-log"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775847 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-log"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775866 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-api"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775888 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-evaluator"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775902 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe65613-300e-43e4-82df-4480ee80a335" containerName="nova-manage"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775928 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-notifier"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775942 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c252f5d-2954-4fdd-a999-29b7c6469ade" containerName="nova-scheduler-scheduler"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775959 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" containerName="nova-api-api"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.775979 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" containerName="aodh-listener"
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.796716 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.796759 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.800334 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.800689 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.800801 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.804055 4957 scope.go:117] "RemoveContainer" containerID="36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.821346 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.821373 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvqkv\" (UniqueName: \"kubernetes.io/projected/f31564e5-da12-4b69-b9ca-da180143fcf7-kube-api-access-dvqkv\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.823381 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.823507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.825826 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.896708 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-config-data" (OuterVolumeSpecName: "config-data") pod "f31564e5-da12-4b69-b9ca-da180143fcf7" (UID: "f31564e5-da12-4b69-b9ca-da180143fcf7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923113 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923181 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-config-data\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923200 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923224 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6bc\" (UniqueName: \"kubernetes.io/projected/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-kube-api-access-nm6bc\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923249 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3cba85-6288-4e31-aeeb-c65994d4592b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923488 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-public-tls-certs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923513 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkm6\" (UniqueName: \"kubernetes.io/projected/1a3cba85-6288-4e31-aeeb-c65994d4592b-kube-api-access-lxkm6\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923546 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a3cba85-6288-4e31-aeeb-c65994d4592b-config-data\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923566 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-logs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:04 crc kubenswrapper[4957]: I0218 14:58:04.923690 4957 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.004534 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f31564e5-da12-4b69-b9ca-da180143fcf7" (UID: "f31564e5-da12-4b69-b9ca-da180143fcf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025508 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-public-tls-certs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025550 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkm6\" (UniqueName: \"kubernetes.io/projected/1a3cba85-6288-4e31-aeeb-c65994d4592b-kube-api-access-lxkm6\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a3cba85-6288-4e31-aeeb-c65994d4592b-config-data\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025614 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-logs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025667 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025699 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025714 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-config-data\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6bc\" (UniqueName: \"kubernetes.io/projected/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-kube-api-access-nm6bc\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.025765 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3cba85-6288-4e31-aeeb-c65994d4592b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.026187 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31564e5-da12-4b69-b9ca-da180143fcf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.027114 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-logs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.030585 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a3cba85-6288-4e31-aeeb-c65994d4592b-config-data\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.031856 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-config-data\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.032937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.033236 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-public-tls-certs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.037315 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3cba85-6288-4e31-aeeb-c65994d4592b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.049978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.051463 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6bc\" (UniqueName: \"kubernetes.io/projected/e38b57a1-5b5d-4046-88b1-248b5eb0fe97-kube-api-access-nm6bc\") pod \"nova-api-0\" (UID: \"e38b57a1-5b5d-4046-88b1-248b5eb0fe97\") " pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.053248 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkm6\" (UniqueName: 
\"kubernetes.io/projected/1a3cba85-6288-4e31-aeeb-c65994d4592b-kube-api-access-lxkm6\") pod \"nova-scheduler-0\" (UID: \"1a3cba85-6288-4e31-aeeb-c65994d4592b\") " pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.128928 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.135806 4957 scope.go:117] "RemoveContainer" containerID="0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.141802 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.228842 4957 scope.go:117] "RemoveContainer" containerID="8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc" Feb 18 14:58:05 crc kubenswrapper[4957]: E0218 14:58:05.229444 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc\": container with ID starting with 8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc not found: ID does not exist" containerID="8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.229492 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc"} err="failed to get container status \"8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc\": rpc error: code = NotFound desc = could not find container \"8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc\": container with ID starting with 8bf1b96865f9211259425a10943eba558295653e6cc9b33d88ceaee925193dcc not found: ID does not exist" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.229545 4957 scope.go:117] "RemoveContainer" containerID="cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209" Feb 18 14:58:05 crc kubenswrapper[4957]: E0218 14:58:05.229791 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209\": container with ID starting with cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209 not found: ID does not exist" containerID="cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.229822 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209"} err="failed to get container status \"cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209\": rpc error: code = NotFound desc = could not find container \"cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209\": container with ID starting with cfad5ed664578418cba8e64a2ba3c814273856afdae8cc6e92591b69e3329209 not found: ID does not exist" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.229840 4957 scope.go:117] "RemoveContainer" containerID="36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f" Feb 18 14:58:05 crc kubenswrapper[4957]: E0218 14:58:05.230165 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f\": container with ID starting with 36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f not found: ID does not exist" containerID="36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.230216 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f"} err="failed to get container status \"36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f\": rpc error: code = NotFound desc = could not find container \"36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f\": container with ID starting with 36c6a347aaf51fb57e8910183d1ab01939a13967586002dbd0ad1a899c4a9b4f not found: ID does not exist" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.230241 4957 scope.go:117] "RemoveContainer" containerID="0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545" Feb 18 14:58:05 crc kubenswrapper[4957]: E0218 14:58:05.230819 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545\": container with ID starting with 0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545 not found: ID does not exist" containerID="0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.230943 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545"} err="failed to get container status \"0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545\": rpc error: code = NotFound desc = could not find container \"0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545\": container with ID starting with 0f4891fb81952c02d798ed6b6740866bca11088f3415c2234b23a29655eea545 not found: ID does not exist" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.230980 4957 scope.go:117] "RemoveContainer" containerID="3ed4d86272f862af96e092e6eb71fba3f83315754986dd67a0ef40866cf6923a" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.327977 4957 scope.go:117] "RemoveContainer" containerID="7190e63d15f3b6d325760c35017fa737c121204867012c337582314afdc0cbb3" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.362922 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.404378 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.414516 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.417945 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.428113 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.428382 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-62fdg" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.428575 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.428772 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.428959 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.440327 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.563811 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-scripts\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.563978 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-public-tls-certs\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.564115 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-internal-tls-certs\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.564180 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2jh\" (UniqueName: \"kubernetes.io/projected/2f90e4b1-a149-424f-9ad1-228b7bf592ab-kube-api-access-fz2jh\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.564273 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-config-data\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.564369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.666069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-config-data\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 
14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.666170 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.666273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-scripts\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.666339 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-public-tls-certs\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.666398 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-internal-tls-certs\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.666446 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz2jh\" (UniqueName: \"kubernetes.io/projected/2f90e4b1-a149-424f-9ad1-228b7bf592ab-kube-api-access-fz2jh\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.672883 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-internal-tls-certs\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.672975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-scripts\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.673026 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-public-tls-certs\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.673821 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.682766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-config-data\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.687386 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
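Note that the replacement aodh-0 gets a new pod UID (2f90e4b1-...) even though the pod name is unchanged: the kubelet keys on-disk pod state by UID, so the new pod's volumes are set up under a fresh directory while the old UID's tree (f31564e5-...) is torn down independently. A sketch of the per-UID path layout, inferred from the /var/lib/kubelet/pods/<UID>/volumes paths in this log (the plugin subdirectory name is an assumption here):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// podVolumeDir builds the per-UID mount path for a volume.
func podVolumeDir(podUID, plugin, volume string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", plugin, volume)
}

func main() {
	// Same pod name, different UIDs -> disjoint on-disk state.
	fmt.Println(podVolumeDir("f31564e5-da12-4b69-b9ca-da180143fcf7", "kubernetes.io~secret", "scripts")) // old aodh-0
	fmt.Println(podVolumeDir("2f90e4b1-a149-424f-9ad1-228b7bf592ab", "kubernetes.io~secret", "scripts")) // new aodh-0
}
```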
pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerStarted","Data":"fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02"} Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.695052 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz2jh\" (UniqueName: \"kubernetes.io/projected/2f90e4b1-a149-424f-9ad1-228b7bf592ab-kube-api-access-fz2jh\") pod \"aodh-0\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.755155 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsb6k"] Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.764021 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:58:05 crc kubenswrapper[4957]: W0218 14:58:05.805842 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a3cba85_6288_4e31_aeeb_c65994d4592b.slice/crio-0f77e8fa6219ef9ce660f46988f42d416a70ea756949460872349ab4d85f09df WatchSource:0}: Error finding container 0f77e8fa6219ef9ce660f46988f42d416a70ea756949460872349ab4d85f09df: Status 404 returned error can't find the container with id 0f77e8fa6219ef9ce660f46988f42d416a70ea756949460872349ab4d85f09df Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.808300 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:58:05 crc kubenswrapper[4957]: I0218 14:58:05.829606 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.131177 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4d2wd"] Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.131780 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4d2wd" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="registry-server" containerID="cri-o://59dd8de192c07315ff240d021ddb1434f7b3985bb35d2e2b0a569b9181b13ab4" gracePeriod=2 Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.135576 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": read tcp 10.217.0.2:42092->10.217.0.247:8775: read: connection reset by peer" Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.135576 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": read tcp 10.217.0.2:42098->10.217.0.247:8775: read: connection reset by peer" Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.368085 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c252f5d-2954-4fdd-a999-29b7c6469ade" path="/var/lib/kubelet/pods/0c252f5d-2954-4fdd-a999-29b7c6469ade/volumes" Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.369359 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83538f1b-8063-4c0c-8bd3-ba87ce3a0637" path="/var/lib/kubelet/pods/83538f1b-8063-4c0c-8bd3-ba87ce3a0637/volumes" Feb 18 14:58:06 crc 
kubenswrapper[4957]: I0218 14:58:06.384114 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31564e5-da12-4b69-b9ca-da180143fcf7" path="/var/lib/kubelet/pods/f31564e5-da12-4b69-b9ca-da180143fcf7/volumes" Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.385669 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.711744 4957 generic.go:334] "Generic (PLEG): container finished" podID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerID="1ccf214a29b8e0df23866d723b91742a0ad63fcd37d6787b81417e5862edbcec" exitCode=0 Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.711825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8ad40e-e92a-4b1c-b081-7da83ab34669","Type":"ContainerDied","Data":"1ccf214a29b8e0df23866d723b91742a0ad63fcd37d6787b81417e5862edbcec"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.720774 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1a3cba85-6288-4e31-aeeb-c65994d4592b","Type":"ContainerStarted","Data":"65ad665b51072d9c9460fb2e4e62667e0f340c3de975ff3349ce47e93a65ef6e"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.720855 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1a3cba85-6288-4e31-aeeb-c65994d4592b","Type":"ContainerStarted","Data":"0f77e8fa6219ef9ce660f46988f42d416a70ea756949460872349ab4d85f09df"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.750662 4957 generic.go:334] "Generic (PLEG): container finished" podID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerID="59dd8de192c07315ff240d021ddb1434f7b3985bb35d2e2b0a569b9181b13ab4" exitCode=0 Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.750833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerDied","Data":"59dd8de192c07315ff240d021ddb1434f7b3985bb35d2e2b0a569b9181b13ab4"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.751528 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.751508036 podStartE2EDuration="2.751508036s" podCreationTimestamp="2026-02-18 14:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:58:06.747924722 +0000 UTC m=+1593.268789476" watchObservedRunningTime="2026-02-18 14:58:06.751508036 +0000 UTC m=+1593.272372780" Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.763739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e38b57a1-5b5d-4046-88b1-248b5eb0fe97","Type":"ContainerStarted","Data":"c1a9756af44e9e99cdb8522d26f35e1dae7a21103d5cc879c1b231f392801095"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.763794 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e38b57a1-5b5d-4046-88b1-248b5eb0fe97","Type":"ContainerStarted","Data":"bb36a92adc72833e285fc6f96184cad41a4259c0d5bad292aba12d6df8f738e2"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.763809 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
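The two prober.go failures above are expected during this rollout: nova-metadata-0 is being torn down, so its HTTPS endpoint resets connections mid-request, which surfaces as "read: connection reset by peer". A hedged sketch of a single kubelet-style HTTPS readiness check (assumptions: no certificate verification and any 2xx/3xx status counting as success; this is not the kubelet's prober code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTPS GET and classifies the result the way the
// "Probe failed" lines above report it.
func probe(url string) string {
	client := &http.Client{
		Timeout:   time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		// A server closing the socket during teardown lands here.
		return fmt.Sprintf("probeResult=%q output=%q", "failure", err.Error())
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return `probeResult="success"`
	}
	return fmt.Sprintf("probeResult=%q status=%d", "failure", resp.StatusCode)
}

func main() {
	fmt.Println(probe("https://10.217.0.247:8775/"))
}
```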
event={"ID":"e38b57a1-5b5d-4046-88b1-248b5eb0fe97","Type":"ContainerStarted","Data":"9d6484a18240697fa69a829b7a005e0e09297a0a6005d71bb188bc5935ca606c"} Feb 18 14:58:06 crc kubenswrapper[4957]: I0218 14:58:06.773862 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerStarted","Data":"9f9cae1968763df0f7b8acab6e412fc94a1619f33ff92438f71d794de823e133"} Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.005809 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.097402 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.097379833 podStartE2EDuration="3.097379833s" podCreationTimestamp="2026-02-18 14:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:58:06.808809154 +0000 UTC m=+1593.329673918" watchObservedRunningTime="2026-02-18 14:58:07.097379833 +0000 UTC m=+1593.618244577" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.172760 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-config-data\") pod \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.173052 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-combined-ca-bundle\") pod \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.173139 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-nova-metadata-tls-certs\") pod \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.173170 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvsk8\" (UniqueName: \"kubernetes.io/projected/3e8ad40e-e92a-4b1c-b081-7da83ab34669-kube-api-access-tvsk8\") pod \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.173241 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8ad40e-e92a-4b1c-b081-7da83ab34669-logs\") pod \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\" (UID: \"3e8ad40e-e92a-4b1c-b081-7da83ab34669\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.175112 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8ad40e-e92a-4b1c-b081-7da83ab34669-logs" (OuterVolumeSpecName: "logs") pod "3e8ad40e-e92a-4b1c-b081-7da83ab34669" (UID: "3e8ad40e-e92a-4b1c-b081-7da83ab34669"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.199235 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8ad40e-e92a-4b1c-b081-7da83ab34669-kube-api-access-tvsk8" (OuterVolumeSpecName: "kube-api-access-tvsk8") pod "3e8ad40e-e92a-4b1c-b081-7da83ab34669" (UID: "3e8ad40e-e92a-4b1c-b081-7da83ab34669"). InnerVolumeSpecName "kube-api-access-tvsk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.223342 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvsk8\" (UniqueName: \"kubernetes.io/projected/3e8ad40e-e92a-4b1c-b081-7da83ab34669-kube-api-access-tvsk8\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.223377 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8ad40e-e92a-4b1c-b081-7da83ab34669-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.255971 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e8ad40e-e92a-4b1c-b081-7da83ab34669" (UID: "3e8ad40e-e92a-4b1c-b081-7da83ab34669"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.283400 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4d2wd" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.290106 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-config-data" (OuterVolumeSpecName: "config-data") pod "3e8ad40e-e92a-4b1c-b081-7da83ab34669" (UID: "3e8ad40e-e92a-4b1c-b081-7da83ab34669"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.332729 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-catalog-content\") pod \"fad52f8a-f601-4c92-8545-2e384475a5d2\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.332796 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2ls8\" (UniqueName: \"kubernetes.io/projected/fad52f8a-f601-4c92-8545-2e384475a5d2-kube-api-access-j2ls8\") pod \"fad52f8a-f601-4c92-8545-2e384475a5d2\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.332996 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-utilities\") pod \"fad52f8a-f601-4c92-8545-2e384475a5d2\" (UID: \"fad52f8a-f601-4c92-8545-2e384475a5d2\") " Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.341155 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad52f8a-f601-4c92-8545-2e384475a5d2-kube-api-access-j2ls8" (OuterVolumeSpecName: "kube-api-access-j2ls8") pod "fad52f8a-f601-4c92-8545-2e384475a5d2" (UID: "fad52f8a-f601-4c92-8545-2e384475a5d2"). 
InnerVolumeSpecName "kube-api-access-j2ls8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.341594 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-utilities" (OuterVolumeSpecName: "utilities") pod "fad52f8a-f601-4c92-8545-2e384475a5d2" (UID: "fad52f8a-f601-4c92-8545-2e384475a5d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.342970 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2ls8\" (UniqueName: \"kubernetes.io/projected/fad52f8a-f601-4c92-8545-2e384475a5d2-kube-api-access-j2ls8\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.342991 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.343023 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.343036 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.374595 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3e8ad40e-e92a-4b1c-b081-7da83ab34669" (UID: "3e8ad40e-e92a-4b1c-b081-7da83ab34669"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.445615 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fad52f8a-f601-4c92-8545-2e384475a5d2" (UID: "fad52f8a-f601-4c92-8545-2e384475a5d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.446034 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad52f8a-f601-4c92-8545-2e384475a5d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.446054 4957 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8ad40e-e92a-4b1c-b081-7da83ab34669-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.784870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4d2wd" event={"ID":"fad52f8a-f601-4c92-8545-2e384475a5d2","Type":"ContainerDied","Data":"38dccf78b00c5de87e3ae2593540563c727e68e3d6e5acb70e72feebb58d41e0"} Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.785162 4957 scope.go:117] "RemoveContainer" containerID="59dd8de192c07315ff240d021ddb1434f7b3985bb35d2e2b0a569b9181b13ab4" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.785336 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4d2wd" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.793651 4957 generic.go:334] "Generic (PLEG): container finished" podID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerID="fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02" exitCode=0 Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.793752 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerDied","Data":"fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02"} Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.807913 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerStarted","Data":"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6"} Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.815888 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:58:07 crc kubenswrapper[4957]: I0218 14:58:07.826507 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8ad40e-e92a-4b1c-b081-7da83ab34669","Type":"ContainerDied","Data":"0323e7705b1a0db12c5d6949258c55ff251c52b5705ccf8942e20a678c39cce4"} Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.023104 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.028780 4957 scope.go:117] "RemoveContainer" containerID="00382dc050747dfa9494cde8a13edb284c7b71cb29c53227cf9c5c49c8be1eb6" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.055508 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.071920 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4d2wd"] Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.087057 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4d2wd"] Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.097789 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:58:08 crc kubenswrapper[4957]: E0218 14:58:08.101566 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="registry-server" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.101601 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="registry-server" Feb 18 14:58:08 crc kubenswrapper[4957]: E0218 14:58:08.101634 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-log" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.101641 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-log" Feb 18 14:58:08 crc kubenswrapper[4957]: E0218 14:58:08.101651 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-metadata" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.101662 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-metadata" Feb 18 14:58:08 crc kubenswrapper[4957]: E0218 14:58:08.101705 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="extract-utilities" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.101711 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="extract-utilities" Feb 18 14:58:08 crc kubenswrapper[4957]: E0218 14:58:08.101727 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="extract-content" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.101732 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="extract-content" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.102147 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" containerName="registry-server" Feb 18 
14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.102177 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-metadata" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.102190 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" containerName="nova-metadata-log" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.103495 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.109957 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.111809 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.111986 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.168004 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c129411-cf16-45ad-be6b-e31866a236e7-logs\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.168082 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-config-data\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.168138 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.168194 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2t5\" (UniqueName: \"kubernetes.io/projected/5c129411-cf16-45ad-be6b-e31866a236e7-kube-api-access-jn2t5\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.168282 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.169534 4957 scope.go:117] "RemoveContainer" containerID="d7670c33e30a9db9b44d7e56018e11e7ce0b6b7157d7df8cfc7fc98fe5c3f7d8" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.211449 4957 scope.go:117] "RemoveContainer" containerID="1ccf214a29b8e0df23866d723b91742a0ad63fcd37d6787b81417e5862edbcec" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.236558 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8ad40e-e92a-4b1c-b081-7da83ab34669" 
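The "Cleaned up orphaned pod volumes dir" entries (earlier at 14:58:06 for the old nova-scheduler/nova-api/aodh UIDs, and immediately below for nova-metadata and the marketplace pod) are the kubelet's housekeeping: once a deleted pod's volumes are all unmounted, its per-UID directory can be removed. A hedged sketch of that garbage collection (the real kubelet_volumes.go also verifies nothing is still mounted first):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs reports the volumes dir of any pod UID the kubelet
// no longer tracks.
func cleanupOrphanedPodDirs(root string, activeUIDs map[string]bool) {
	entries, err := os.ReadDir(root)
	if err != nil {
		return // e.g. not running on a node
	}
	for _, e := range entries {
		if e.IsDir() && !activeUIDs[e.Name()] {
			path := filepath.Join(root, e.Name(), "volumes")
			fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), path)
			// A real implementation would remove the directory here:
			// os.RemoveAll(path)
		}
	}
}

func main() {
	cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{
		"5c129411-cf16-45ad-be6b-e31866a236e7": true, // replacement nova-metadata-0 stays
	})
}
```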
path="/var/lib/kubelet/pods/3e8ad40e-e92a-4b1c-b081-7da83ab34669/volumes" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.237196 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad52f8a-f601-4c92-8545-2e384475a5d2" path="/var/lib/kubelet/pods/fad52f8a-f601-4c92-8545-2e384475a5d2/volumes" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.242183 4957 scope.go:117] "RemoveContainer" containerID="6d4d3e813992d51b14303f85553e5b8c2444d099c20f27f2074f789417281247" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.270771 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c129411-cf16-45ad-be6b-e31866a236e7-logs\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.270914 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-config-data\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.270989 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.271100 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn2t5\" (UniqueName: \"kubernetes.io/projected/5c129411-cf16-45ad-be6b-e31866a236e7-kube-api-access-jn2t5\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.271403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.271857 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c129411-cf16-45ad-be6b-e31866a236e7-logs\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.275349 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.275805 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.281157 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c129411-cf16-45ad-be6b-e31866a236e7-config-data\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.288563 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn2t5\" (UniqueName: \"kubernetes.io/projected/5c129411-cf16-45ad-be6b-e31866a236e7-kube-api-access-jn2t5\") pod \"nova-metadata-0\" (UID: \"5c129411-cf16-45ad-be6b-e31866a236e7\") " pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.503637 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.865274 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerStarted","Data":"1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca"} Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.879659 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerStarted","Data":"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598"} Feb 18 14:58:08 crc kubenswrapper[4957]: I0218 14:58:08.910550 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vdxnv" podStartSLOduration=3.134948373 podStartE2EDuration="7.910526563s" podCreationTimestamp="2026-02-18 14:58:01 +0000 UTC" firstStartedPulling="2026-02-18 14:58:03.603245889 +0000 UTC m=+1590.124110633" lastFinishedPulling="2026-02-18 14:58:08.378824079 +0000 UTC m=+1594.899688823" observedRunningTime="2026-02-18 14:58:08.897739283 +0000 UTC m=+1595.418604027" watchObservedRunningTime="2026-02-18 14:58:08.910526563 +0000 UTC m=+1595.431391297" Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.087615 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:58:09 crc kubenswrapper[4957]: W0218 14:58:09.097505 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c129411_cf16_45ad_be6b_e31866a236e7.slice/crio-229a0388efbe6d61deb91d0ca75a64c3667159066079f7d59ea6afc3915a4e78 WatchSource:0}: Error finding container 229a0388efbe6d61deb91d0ca75a64c3667159066079f7d59ea6afc3915a4e78: Status 404 returned error can't find the container with id 229a0388efbe6d61deb91d0ca75a64c3667159066079f7d59ea6afc3915a4e78 Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.907816 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c129411-cf16-45ad-be6b-e31866a236e7","Type":"ContainerStarted","Data":"644c80e9e53d8e7ec887f58c5b217f470bf203458d4c0791bd024972ef8f378e"} Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.908221 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c129411-cf16-45ad-be6b-e31866a236e7","Type":"ContainerStarted","Data":"f352d38043b02a260760fb45d6ae2aa493ac27791495230fc0737c8fdf217406"} Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.908239 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5c129411-cf16-45ad-be6b-e31866a236e7","Type":"ContainerStarted","Data":"229a0388efbe6d61deb91d0ca75a64c3667159066079f7d59ea6afc3915a4e78"} Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.915279 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerStarted","Data":"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819"} Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.915318 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerStarted","Data":"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27"} Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.944871 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.9448525490000002 podStartE2EDuration="1.944852549s" podCreationTimestamp="2026-02-18 14:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:58:09.929512505 +0000 UTC m=+1596.450377269" watchObservedRunningTime="2026-02-18 14:58:09.944852549 +0000 UTC m=+1596.465717293" Feb 18 14:58:09 crc kubenswrapper[4957]: I0218 14:58:09.967802 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.012080365 podStartE2EDuration="4.967781952s" podCreationTimestamp="2026-02-18 14:58:05 +0000 UTC" firstStartedPulling="2026-02-18 14:58:06.500099192 +0000 UTC m=+1593.020963936" lastFinishedPulling="2026-02-18 14:58:09.455800779 +0000 UTC m=+1595.976665523" observedRunningTime="2026-02-18 14:58:09.964690253 +0000 UTC m=+1596.485554997" watchObservedRunningTime="2026-02-18 14:58:09.967781952 +0000 UTC m=+1596.488646696" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.145629 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.512524 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xbcf9"] Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.515709 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.524722 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbcf9"] Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.663465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkf5h\" (UniqueName: \"kubernetes.io/projected/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-kube-api-access-xkf5h\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.663640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-utilities\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.664066 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-catalog-content\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.765859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-catalog-content\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.765937 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkf5h\" (UniqueName: \"kubernetes.io/projected/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-kube-api-access-xkf5h\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.765985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-utilities\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.766604 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-catalog-content\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.766623 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-utilities\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.789093 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xkf5h\" (UniqueName: \"kubernetes.io/projected/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-kube-api-access-xkf5h\") pod \"redhat-marketplace-xbcf9\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:10 crc kubenswrapper[4957]: I0218 14:58:10.844125 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.444831 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbcf9"] Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.906905 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.908335 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.948877 4957 generic.go:334] "Generic (PLEG): container finished" podID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerID="552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18" exitCode=0 Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.951644 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerDied","Data":"552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18"} Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.951689 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerStarted","Data":"df3ceb2db8bada3c973dbdd196b82e70748aa3efea09d15bdb0777b2c0c88640"} Feb 18 14:58:11 crc kubenswrapper[4957]: I0218 14:58:11.966943 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:12 crc kubenswrapper[4957]: I0218 14:58:12.962429 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerStarted","Data":"259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d"} Feb 18 14:58:13 crc kubenswrapper[4957]: I0218 14:58:13.504495 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:58:13 crc kubenswrapper[4957]: I0218 14:58:13.504831 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:58:13 crc kubenswrapper[4957]: I0218 14:58:13.986756 4957 generic.go:334] "Generic (PLEG): container finished" podID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerID="259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d" exitCode=0 Feb 18 14:58:13 crc kubenswrapper[4957]: I0218 14:58:13.986818 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerDied","Data":"259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d"} Feb 18 14:58:14 crc kubenswrapper[4957]: I0218 14:58:14.221917 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398"
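
The reconciler_common.go progression above (VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded) and the UnmountVolume/TearDown sequences further down are the two halves of the volume manager reconciling desired state (volumes the scheduled pods need) against actual state (what is mounted on the node). A toy version of that loop, with hypothetical types rather than the kubelet's real desired/actual state caches:

package main

import "fmt"

type volume struct{ name, pod string }

// reconcile unmounts anything no pod wants any more, then mounts anything
// the desired state wants that is not yet mounted, mirroring the
// MountVolume / UnmountVolume lines in the log.
func reconcile(desired, mounted map[volume]bool) {
	for v := range mounted {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
			delete(mounted, v)
		}
	}
	for v := range desired {
		if !mounted[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
			mounted[v] = true // MountVolume.SetUp succeeded
		}
	}
}

func main() {
	desired := map[volume]bool{
		{"utilities", "redhat-marketplace-xbcf9"}:       true,
		{"catalog-content", "redhat-marketplace-xbcf9"}: true,
	}
	mounted := map[volume]bool{}
	reconcile(desired, mounted) // mounts both volumes
	delete(desired, volume{"utilities", "redhat-marketplace-xbcf9"})
	reconcile(desired, mounted) // unmounts the one no longer wanted
}

Feb 18 14:58:14 crc 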
kubenswrapper[4957]: E0218 14:58:14.222558 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:58:14 crc kubenswrapper[4957]: I0218 14:58:14.998036 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerStarted","Data":"03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca"} Feb 18 14:58:15 crc kubenswrapper[4957]: I0218 14:58:15.040662 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xbcf9" podStartSLOduration=2.59172236 podStartE2EDuration="5.040638103s" podCreationTimestamp="2026-02-18 14:58:10 +0000 UTC" firstStartedPulling="2026-02-18 14:58:11.953769972 +0000 UTC m=+1598.474634716" lastFinishedPulling="2026-02-18 14:58:14.402685715 +0000 UTC m=+1600.923550459" observedRunningTime="2026-02-18 14:58:15.022259451 +0000 UTC m=+1601.543124195" watchObservedRunningTime="2026-02-18 14:58:15.040638103 +0000 UTC m=+1601.561502847" Feb 18 14:58:15 crc kubenswrapper[4957]: I0218 14:58:15.130850 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:58:15 crc kubenswrapper[4957]: I0218 14:58:15.130932 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:58:15 crc kubenswrapper[4957]: I0218 14:58:15.142484 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:58:15 crc kubenswrapper[4957]: I0218 14:58:15.185157 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:58:16 crc kubenswrapper[4957]: I0218 14:58:16.053438 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:58:16 crc kubenswrapper[4957]: I0218 14:58:16.145587 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e38b57a1-5b5d-4046-88b1-248b5eb0fe97" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:58:16 crc kubenswrapper[4957]: I0218 14:58:16.145927 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e38b57a1-5b5d-4046-88b1-248b5eb0fe97" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:58:18 crc kubenswrapper[4957]: I0218 14:58:18.503708 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:58:18 crc kubenswrapper[4957]: I0218 14:58:18.503752 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
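
The "Client.Timeout exceeded while awaiting headers" probe failures just above (nova-api) and just below (nova-metadata) are Go's net/http timeout surfacing through the kubelet's HTTP prober: the client gives up when the probe's timeout elapses before the server returns response headers, which is expected while these services are still warming up. A standalone sketch of such a check; note that Kubernetes HTTPS probes skip verification of the serving certificate, which InsecureSkipVerify imitates here, and the URL is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // the probe's timeoutSeconds
		Transport: &http.Transport{
			// HTTPS probes do not verify the serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.1.5:8774/")
	if err != nil {
		// While the API is warming up this yields the error seen above:
		//   ... (Client.Timeout exceeded while awaiting headers)
		fmt.Println("Probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// Any status in the 2xx/3xx range counts as probe success.
	fmt.Println("probeResult: success, status", resp.StatusCode)
}

Feb 18 14:58:19 crc kubenswrapper[4957]: I0218 14:58:19.516593 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c129411-cf16-45ad-be6b-e31866a236e7" 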
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:58:19 crc kubenswrapper[4957]: I0218 14:58:19.516626 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c129411-cf16-45ad-be6b-e31866a236e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:58:20 crc kubenswrapper[4957]: I0218 14:58:20.845188 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:20 crc kubenswrapper[4957]: I0218 14:58:20.845241 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:20 crc kubenswrapper[4957]: I0218 14:58:20.903701 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:21 crc kubenswrapper[4957]: I0218 14:58:21.123100 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:21 crc kubenswrapper[4957]: I0218 14:58:21.183052 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbcf9"] Feb 18 14:58:21 crc kubenswrapper[4957]: I0218 14:58:21.992402 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.110479 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xbcf9" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="registry-server" containerID="cri-o://03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca" gracePeriod=2 Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.548494 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vdxnv"] Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.548888 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vdxnv" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="registry-server" containerID="cri-o://1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca" gracePeriod=2 Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.827271 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.889172 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkf5h\" (UniqueName: \"kubernetes.io/projected/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-kube-api-access-xkf5h\") pod \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.889736 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-utilities\") pod \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.889762 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-catalog-content\") pod \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\" (UID: \"ebc99916-f0ef-4d46-a2b2-b22a69b4eece\") " Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.892293 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-utilities" (OuterVolumeSpecName: "utilities") pod "ebc99916-f0ef-4d46-a2b2-b22a69b4eece" (UID: "ebc99916-f0ef-4d46-a2b2-b22a69b4eece"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.911660 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-kube-api-access-xkf5h" (OuterVolumeSpecName: "kube-api-access-xkf5h") pod "ebc99916-f0ef-4d46-a2b2-b22a69b4eece" (UID: "ebc99916-f0ef-4d46-a2b2-b22a69b4eece"). InnerVolumeSpecName "kube-api-access-xkf5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.929690 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebc99916-f0ef-4d46-a2b2-b22a69b4eece" (UID: "ebc99916-f0ef-4d46-a2b2-b22a69b4eece"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.976343 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.993525 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhrvt\" (UniqueName: \"kubernetes.io/projected/139c8bb5-20f4-4bf6-a384-551cd08dd46e-kube-api-access-xhrvt\") pod \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.993700 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-catalog-content\") pod \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.993815 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-utilities\") pod \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\" (UID: \"139c8bb5-20f4-4bf6-a384-551cd08dd46e\") " Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.994433 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-utilities" (OuterVolumeSpecName: "utilities") pod "139c8bb5-20f4-4bf6-a384-551cd08dd46e" (UID: "139c8bb5-20f4-4bf6-a384-551cd08dd46e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.995245 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.995275 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkf5h\" (UniqueName: \"kubernetes.io/projected/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-kube-api-access-xkf5h\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.995290 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.995303 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc99916-f0ef-4d46-a2b2-b22a69b4eece-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:23 crc kubenswrapper[4957]: I0218 14:58:23.996838 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139c8bb5-20f4-4bf6-a384-551cd08dd46e-kube-api-access-xhrvt" (OuterVolumeSpecName: "kube-api-access-xhrvt") pod "139c8bb5-20f4-4bf6-a384-551cd08dd46e" (UID: "139c8bb5-20f4-4bf6-a384-551cd08dd46e"). InnerVolumeSpecName "kube-api-access-xhrvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.050447 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "139c8bb5-20f4-4bf6-a384-551cd08dd46e" (UID: "139c8bb5-20f4-4bf6-a384-551cd08dd46e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.097752 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhrvt\" (UniqueName: \"kubernetes.io/projected/139c8bb5-20f4-4bf6-a384-551cd08dd46e-kube-api-access-xhrvt\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.097790 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139c8bb5-20f4-4bf6-a384-551cd08dd46e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.123780 4957 generic.go:334] "Generic (PLEG): container finished" podID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerID="1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca" exitCode=0 Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.123845 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerDied","Data":"1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca"} Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.123871 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdxnv" event={"ID":"139c8bb5-20f4-4bf6-a384-551cd08dd46e","Type":"ContainerDied","Data":"fd0474c2c67f3e5bf154b28f1639e7b8a7ed45a24af0be7eb669993c4c1499b6"} Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.123889 4957 scope.go:117] "RemoveContainer" containerID="1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.124020 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdxnv" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.130239 4957 generic.go:334] "Generic (PLEG): container finished" podID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerID="03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca" exitCode=0 Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.130293 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerDied","Data":"03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca"} Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.130323 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbcf9" event={"ID":"ebc99916-f0ef-4d46-a2b2-b22a69b4eece","Type":"ContainerDied","Data":"df3ceb2db8bada3c973dbdd196b82e70748aa3efea09d15bdb0777b2c0c88640"} Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.130392 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbcf9" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.166872 4957 scope.go:117] "RemoveContainer" containerID="fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.170193 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vdxnv"] Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.192149 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vdxnv"] Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.204713 4957 scope.go:117] "RemoveContainer" containerID="8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.227494 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" path="/var/lib/kubelet/pods/139c8bb5-20f4-4bf6-a384-551cd08dd46e/volumes" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.229284 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbcf9"] Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.231219 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbcf9"] Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.233446 4957 scope.go:117] "RemoveContainer" containerID="1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca" Feb 18 14:58:24 crc kubenswrapper[4957]: E0218 14:58:24.234171 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca\": container with ID starting with 1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca not found: ID does not exist" containerID="1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.234291 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca"} err="failed to get container status \"1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca\": rpc error: code = NotFound desc = could not find container \"1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca\": container with ID starting with 1c2fedb7ab3d98249bd2b88d093d2ec692f902f3603fa23e13f56b1cbc80d8ca not found: ID does not exist" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.234387 4957 scope.go:117] "RemoveContainer" containerID="fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02" Feb 18 14:58:24 crc kubenswrapper[4957]: E0218 14:58:24.235065 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02\": container with ID starting with fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02 not found: ID does not exist" containerID="fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.235227 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02"} err="failed to get container status 
\"fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02\": rpc error: code = NotFound desc = could not find container \"fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02\": container with ID starting with fb7d3524f6dde8c7532c8da685053f5ec3d5870385325f4c45bcd1a56862aa02 not found: ID does not exist" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.235311 4957 scope.go:117] "RemoveContainer" containerID="8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af" Feb 18 14:58:24 crc kubenswrapper[4957]: E0218 14:58:24.235949 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af\": container with ID starting with 8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af not found: ID does not exist" containerID="8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.236004 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af"} err="failed to get container status \"8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af\": rpc error: code = NotFound desc = could not find container \"8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af\": container with ID starting with 8a89f3f1ca2c34059121124359cb85657a9d8139ed3f89b751621802e81df5af not found: ID does not exist" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.236049 4957 scope.go:117] "RemoveContainer" containerID="03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.305622 4957 scope.go:117] "RemoveContainer" containerID="259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.338225 4957 scope.go:117] "RemoveContainer" containerID="552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.391798 4957 scope.go:117] "RemoveContainer" containerID="03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca" Feb 18 14:58:24 crc kubenswrapper[4957]: E0218 14:58:24.392299 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca\": container with ID starting with 03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca not found: ID does not exist" containerID="03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.392365 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca"} err="failed to get container status \"03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca\": rpc error: code = NotFound desc = could not find container \"03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca\": container with ID starting with 03832a2f289f902776dcfe86593c9d76558814b075e6ec40b55741b7dd7988ca not found: ID does not exist" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.392403 4957 scope.go:117] "RemoveContainer" containerID="259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d" Feb 18 14:58:24 crc 
kubenswrapper[4957]: E0218 14:58:24.392876 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d\": container with ID starting with 259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d not found: ID does not exist" containerID="259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.392912 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d"} err="failed to get container status \"259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d\": rpc error: code = NotFound desc = could not find container \"259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d\": container with ID starting with 259156d0ab07c4ddea0b6e7fd969b6e0d4cb24ee698a6271dca17fc05f8ac85d not found: ID does not exist" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.392935 4957 scope.go:117] "RemoveContainer" containerID="552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18" Feb 18 14:58:24 crc kubenswrapper[4957]: E0218 14:58:24.393254 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18\": container with ID starting with 552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18 not found: ID does not exist" containerID="552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18" Feb 18 14:58:24 crc kubenswrapper[4957]: I0218 14:58:24.393338 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18"} err="failed to get container status \"552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18\": rpc error: code = NotFound desc = could not find container \"552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18\": container with ID starting with 552276738b0db6dc48fbfb739b18972a30795a1009a0f59edbe556e70de90d18 not found: ID does not exist" Feb 18 14:58:25 crc kubenswrapper[4957]: I0218 14:58:25.143092 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:58:25 crc kubenswrapper[4957]: I0218 14:58:25.144081 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:58:25 crc kubenswrapper[4957]: I0218 14:58:25.153796 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:58:25 crc kubenswrapper[4957]: I0218 14:58:25.154960 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:58:25 crc kubenswrapper[4957]: I0218 14:58:25.212928 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:58:25 crc kubenswrapper[4957]: E0218 14:58:25.213600 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" 
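podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"

The recurring "back-off 5m0s restarting failed container" entries above are the kubelet's crash-loop backoff at its ceiling: the restart delay starts at 10s and doubles after each failed restart until it is capped at five minutes, resetting once the container has run cleanly for a while. The schedule, sketched:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CrashLoopBackOff delay: 10s initially, doubled per failure, capped
	// at 5m, hence "back-off 5m0s" once a container has failed enough times.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for i := 1; delay < maxDelay; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		delay *= 2
	}
	fmt.Println("thereafter: wait", maxDelay) // back-off 5m0s
}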
podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:58:25 crc kubenswrapper[4957]: I0218 14:58:25.912056 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:58:26 crc kubenswrapper[4957]: I0218 14:58:26.165851 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:58:26 crc kubenswrapper[4957]: I0218 14:58:26.173706 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:58:26 crc kubenswrapper[4957]: I0218 14:58:26.239782 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" path="/var/lib/kubelet/pods/ebc99916-f0ef-4d46-a2b2-b22a69b4eece/volumes" Feb 18 14:58:28 crc kubenswrapper[4957]: I0218 14:58:28.513592 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:58:28 crc kubenswrapper[4957]: I0218 14:58:28.524702 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:58:28 crc kubenswrapper[4957]: I0218 14:58:28.525690 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:58:29 crc kubenswrapper[4957]: I0218 14:58:29.213132 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:58:30 crc kubenswrapper[4957]: I0218 14:58:30.727283 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:58:30 crc kubenswrapper[4957]: I0218 14:58:30.727840 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" containerName="kube-state-metrics" containerID="cri-o://433d13fb9543cea07951bcd4a2518e3dfc9993b92b1249e1b91cea16e4706a25" gracePeriod=30 Feb 18 14:58:30 crc kubenswrapper[4957]: I0218 14:58:30.931245 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:58:30 crc kubenswrapper[4957]: I0218 14:58:30.931831 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="f35cf4ac-acd8-4db3-b633-bab9cac6e322" containerName="mysqld-exporter" containerID="cri-o://797b010ef357795bfe0a8e95e8d7e95190074c7bf5a8522e72145362c38177a8" gracePeriod=30 Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.228946 4957 generic.go:334] "Generic (PLEG): container finished" podID="48a06b17-3799-49aa-97b4-40b55c95fa86" containerID="433d13fb9543cea07951bcd4a2518e3dfc9993b92b1249e1b91cea16e4706a25" exitCode=2 Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.229043 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48a06b17-3799-49aa-97b4-40b55c95fa86","Type":"ContainerDied","Data":"433d13fb9543cea07951bcd4a2518e3dfc9993b92b1249e1b91cea16e4706a25"} Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.232787 4957 generic.go:334] "Generic (PLEG): container finished" podID="f35cf4ac-acd8-4db3-b633-bab9cac6e322" containerID="797b010ef357795bfe0a8e95e8d7e95190074c7bf5a8522e72145362c38177a8" exitCode=2 Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.234141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"f35cf4ac-acd8-4db3-b633-bab9cac6e322","Type":"ContainerDied","Data":"797b010ef357795bfe0a8e95e8d7e95190074c7bf5a8522e72145362c38177a8"} Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.603923 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.611342 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.699046 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wrd2\" (UniqueName: \"kubernetes.io/projected/f35cf4ac-acd8-4db3-b633-bab9cac6e322-kube-api-access-2wrd2\") pod \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.699235 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-config-data\") pod \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.699303 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-combined-ca-bundle\") pod \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\" (UID: \"f35cf4ac-acd8-4db3-b633-bab9cac6e322\") " Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.699386 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh6wb\" (UniqueName: \"kubernetes.io/projected/48a06b17-3799-49aa-97b4-40b55c95fa86-kube-api-access-zh6wb\") pod \"48a06b17-3799-49aa-97b4-40b55c95fa86\" (UID: \"48a06b17-3799-49aa-97b4-40b55c95fa86\") " Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.706967 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35cf4ac-acd8-4db3-b633-bab9cac6e322-kube-api-access-2wrd2" (OuterVolumeSpecName: "kube-api-access-2wrd2") pod "f35cf4ac-acd8-4db3-b633-bab9cac6e322" (UID: "f35cf4ac-acd8-4db3-b633-bab9cac6e322"). InnerVolumeSpecName "kube-api-access-2wrd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.707413 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a06b17-3799-49aa-97b4-40b55c95fa86-kube-api-access-zh6wb" (OuterVolumeSpecName: "kube-api-access-zh6wb") pod "48a06b17-3799-49aa-97b4-40b55c95fa86" (UID: "48a06b17-3799-49aa-97b4-40b55c95fa86"). InnerVolumeSpecName "kube-api-access-zh6wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.738675 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35cf4ac-acd8-4db3-b633-bab9cac6e322" (UID: "f35cf4ac-acd8-4db3-b633-bab9cac6e322"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.769820 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-config-data" (OuterVolumeSpecName: "config-data") pod "f35cf4ac-acd8-4db3-b633-bab9cac6e322" (UID: "f35cf4ac-acd8-4db3-b633-bab9cac6e322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.810461 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh6wb\" (UniqueName: \"kubernetes.io/projected/48a06b17-3799-49aa-97b4-40b55c95fa86-kube-api-access-zh6wb\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.810504 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wrd2\" (UniqueName: \"kubernetes.io/projected/f35cf4ac-acd8-4db3-b633-bab9cac6e322-kube-api-access-2wrd2\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.810514 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:31 crc kubenswrapper[4957]: I0218 14:58:31.810524 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35cf4ac-acd8-4db3-b633-bab9cac6e322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.246348 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f35cf4ac-acd8-4db3-b633-bab9cac6e322","Type":"ContainerDied","Data":"ef9b1dd4702a1feb8d4b9db96f518c1cd388f2ca91ef865eb8052c5ab313e316"} Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.246405 4957 scope.go:117] "RemoveContainer" containerID="797b010ef357795bfe0a8e95e8d7e95190074c7bf5a8522e72145362c38177a8" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.246597 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.249786 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48a06b17-3799-49aa-97b4-40b55c95fa86","Type":"ContainerDied","Data":"195d2c4e98d270c5824eb48385c33114b6369a9191f87d913ecb78765d87024f"} Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.249853 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.288614 4957 scope.go:117] "RemoveContainer" containerID="433d13fb9543cea07951bcd4a2518e3dfc9993b92b1249e1b91cea16e4706a25" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.289532 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.330566 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.357782 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.372273 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.386704 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387660 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" containerName="kube-state-metrics" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387689 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" containerName="kube-state-metrics" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387705 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35cf4ac-acd8-4db3-b633-bab9cac6e322" containerName="mysqld-exporter" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387716 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35cf4ac-acd8-4db3-b633-bab9cac6e322" containerName="mysqld-exporter" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387736 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="extract-content" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387744 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="extract-content" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387762 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="registry-server" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387770 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="registry-server" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387797 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="extract-content" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387807 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="extract-content" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387819 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="extract-utilities" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387827 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="extract-utilities" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387850 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="extract-utilities" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387858 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="extract-utilities" Feb 18 14:58:32 crc kubenswrapper[4957]: E0218 14:58:32.387899 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="registry-server" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.387909 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="registry-server" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.388229 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" containerName="kube-state-metrics" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.388244 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="139c8bb5-20f4-4bf6-a384-551cd08dd46e" containerName="registry-server" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.388255 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc99916-f0ef-4d46-a2b2-b22a69b4eece" containerName="registry-server" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.388279 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35cf4ac-acd8-4db3-b633-bab9cac6e322" containerName="mysqld-exporter" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.389281 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.394585 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.394838 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.414448 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.435322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj922\" (UniqueName: \"kubernetes.io/projected/09169c38-c2c1-43b6-b01e-45320845dc8e-kube-api-access-bj922\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.435496 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.435939 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0" Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.436260 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.438936 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.442353 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.446184 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.447855 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.466085 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.538970 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539150 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539189 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539281 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-config-data\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539324 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptdv\" (UniqueName: \"kubernetes.io/projected/03293231-ba61-4099-89c9-b86cd6d9f489-kube-api-access-gptdv\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539359 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539443 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj922\" (UniqueName: \"kubernetes.io/projected/09169c38-c2c1-43b6-b01e-45320845dc8e-kube-api-access-bj922\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.539473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.544872 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-config-data\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.545115 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.550952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/09169c38-c2c1-43b6-b01e-45320845dc8e-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.560380 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj922\" (UniqueName: \"kubernetes.io/projected/09169c38-c2c1-43b6-b01e-45320845dc8e-kube-api-access-bj922\") pod \"mysqld-exporter-0\" (UID: \"09169c38-c2c1-43b6-b01e-45320845dc8e\") " pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.640975 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gptdv\" (UniqueName: \"kubernetes.io/projected/03293231-ba61-4099-89c9-b86cd6d9f489-kube-api-access-gptdv\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.641053 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.641160 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.641212 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.645590 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.645876 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.646915 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03293231-ba61-4099-89c9-b86cd6d9f489-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.661163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gptdv\" (UniqueName: \"kubernetes.io/projected/03293231-ba61-4099-89c9-b86cd6d9f489-kube-api-access-gptdv\") pod \"kube-state-metrics-0\" (UID: \"03293231-ba61-4099-89c9-b86cd6d9f489\") " pod="openstack/kube-state-metrics-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.713392 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 18 14:58:32 crc kubenswrapper[4957]: I0218 14:58:32.763771 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
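
The volume entries above trace the kubelet's three-phase mount flow for each pod volume: the reconciler first logs "VerifyControllerAttachedVolume started" (confirming the volume is usable on this node), then "MountVolume started", then "MountVolume.SetUp succeeded" once the secret or projected content is materialized on disk. One triple appears per volume because each is reconciled independently. A sketch of the kind of volume list behind mysqld-exporter-0: the secret names "cert-mysqld-exporter-svc" and "mysqld-exporter-config-data" are grounded in the reflector lines above, while the combined-ca-bundle secret name is an assumption (the log shows only the volume name):

    // volumes_sketch.go - a pod volume list that yields one
    // Verify/Mount/SetUp triple per entry in the kubelet log.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {Name: "config-data", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "mysqld-exporter-config-data"}}},
            {Name: "mysqld-exporter-tls-certs", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "cert-mysqld-exporter-svc"}}},
            // Secret name below is an assumption; only the volume name is logged.
            {Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}}},
            // "kube-api-access-bj922" never appears in a manifest: it is the
            // projected service-account token volume injected automatically,
            // hence the kubernetes.io/projected path in the log.
        }
        for _, v := range vols {
            fmt.Println(v.Name)
        }
    }
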
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.151061 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.152247 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-notification-agent" containerID="cri-o://1e16c55871f4bb19653344e2eff10737813b704cee6acc7536efa492fb30c80f" gracePeriod=30
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.152285 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="sg-core" containerID="cri-o://bb8dea5340466054afe539ff2ccc7ab58a1810b312586cd487a6a6e73dc482ad" gracePeriod=30
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.152495 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="proxy-httpd" containerID="cri-o://63740087cdbbd8d9d348d53457e27e0ca5fe2fa12f187074387563ee9ead2b6b" gracePeriod=30
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.152522 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-central-agent" containerID="cri-o://eeb170b903af85131dc6ae701cedca0c4de379321ed706405103fd7adfcd395e" gracePeriod=30
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.280751 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.285374 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 18 14:58:33 crc kubenswrapper[4957]: I0218 14:58:33.433466 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.243283 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a06b17-3799-49aa-97b4-40b55c95fa86" path="/var/lib/kubelet/pods/48a06b17-3799-49aa-97b4-40b55c95fa86/volumes"
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.244268 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35cf4ac-acd8-4db3-b633-bab9cac6e322" path="/var/lib/kubelet/pods/f35cf4ac-acd8-4db3-b633-bab9cac6e322/volumes"
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.303842 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"03293231-ba61-4099-89c9-b86cd6d9f489","Type":"ContainerStarted","Data":"0a7d4c8ec7d7f5f3062bde1db19bcbacb101be40cf36f34f471d2ba448100aaa"}
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.318843 4957 generic.go:334] "Generic (PLEG): container finished" podID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerID="63740087cdbbd8d9d348d53457e27e0ca5fe2fa12f187074387563ee9ead2b6b" exitCode=0
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319085 4957 generic.go:334] "Generic (PLEG): container finished" podID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerID="bb8dea5340466054afe539ff2ccc7ab58a1810b312586cd487a6a6e73dc482ad" exitCode=2
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319207 4957 generic.go:334] "Generic (PLEG): container finished" podID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerID="1e16c55871f4bb19653344e2eff10737813b704cee6acc7536efa492fb30c80f" exitCode=0
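
The four "Killing container with a grace period ... gracePeriod=30" entries show the kubelet stopping every container of ceilometer-0 through CRI-O with a 30-second termination budget; the sg-core container exits with code 2 (killed before shutting down cleanly) while the others exit 0. The budget comes from the pod's terminationGracePeriodSeconds or from an override on the delete call. A minimal sketch of the override form, assuming client-go (whether the operator actually overrides it is not visible in the log):

    // graceful_delete.go - delete a pod with an explicit grace period; the
    // kubelet relays the budget to the runtime, producing the
    // "Killing container with a grace period ... gracePeriod=30" entries.
    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        grace := int64(30) // matches gracePeriod=30 in the log
        err = cs.CoreV1().Pods("openstack").Delete(context.Background(), "ceilometer-0",
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
        if err != nil {
            log.Fatal(err)
        }
    }
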
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319218 4957 generic.go:334] "Generic (PLEG): container finished" podID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerID="eeb170b903af85131dc6ae701cedca0c4de379321ed706405103fd7adfcd395e" exitCode=0
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319284 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerDied","Data":"63740087cdbbd8d9d348d53457e27e0ca5fe2fa12f187074387563ee9ead2b6b"}
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319311 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerDied","Data":"bb8dea5340466054afe539ff2ccc7ab58a1810b312586cd487a6a6e73dc482ad"}
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319321 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerDied","Data":"1e16c55871f4bb19653344e2eff10737813b704cee6acc7536efa492fb30c80f"}
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.319330 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerDied","Data":"eeb170b903af85131dc6ae701cedca0c4de379321ed706405103fd7adfcd395e"}
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.327841 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"09169c38-c2c1-43b6-b01e-45320845dc8e","Type":"ContainerStarted","Data":"b0f07829b16b7aefabb6ab44ea53724ac8dec27e02558ed2d38c1985dc21c7a5"}
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.353825 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.816963362 podStartE2EDuration="2.353804882s" podCreationTimestamp="2026-02-18 14:58:32 +0000 UTC" firstStartedPulling="2026-02-18 14:58:33.280430961 +0000 UTC m=+1619.801295715" lastFinishedPulling="2026-02-18 14:58:33.817272491 +0000 UTC m=+1620.338137235" observedRunningTime="2026-02-18 14:58:34.347688636 +0000 UTC m=+1620.868553380" watchObservedRunningTime="2026-02-18 14:58:34.353804882 +0000 UTC m=+1620.874669626"
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.470643 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
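
The "Observed pod startup duration" entry above encodes a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14:58:34.353804882 - 14:58:32 = 2.353804882s), and podStartSLOduration is, to within rounding, that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 0.536841530s, giving about 1.816963s against the logged 1.816963362). The arithmetic can be re-derived directly from the logged timestamps:

    // slo_math.go - re-deriving the numbers in the mysqld-exporter-0
    // "Observed pod startup duration" entry.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-18 14:58:32 +0000 UTC")
        pullStart := mustParse("2026-02-18 14:58:33.280430961 +0000 UTC")
        pullDone := mustParse("2026-02-18 14:58:33.817272491 +0000 UTC")
        running := mustParse("2026-02-18 14:58:34.353804882 +0000 UTC")

        e2e := running.Sub(created)        // 2.353804882s = podStartE2EDuration
        pulling := pullDone.Sub(pullStart) // ~537ms spent pulling the image
        fmt.Println(e2e, pulling, e2e-pulling) // e2e-pulling ~ 1.816963s ~ podStartSLOduration
    }
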
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.508881 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2fgn\" (UniqueName: \"kubernetes.io/projected/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-kube-api-access-m2fgn\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.508997 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-log-httpd\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.509037 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-scripts\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.509132 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-sg-core-conf-yaml\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.509296 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-config-data\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.509349 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-combined-ca-bundle\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.509381 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-run-httpd\") pod \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\" (UID: \"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b\") "
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.510676 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.511886 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.516208 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-kube-api-access-m2fgn" (OuterVolumeSpecName: "kube-api-access-m2fgn") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "kube-api-access-m2fgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.518510 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-scripts" (OuterVolumeSpecName: "scripts") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.565105 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.614011 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.614075 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2fgn\" (UniqueName: \"kubernetes.io/projected/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-kube-api-access-m2fgn\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.614107 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.614124 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.614138 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.619812 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.703530 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-config-data" (OuterVolumeSpecName: "config-data") pod "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" (UID: "79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.718312 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:34 crc kubenswrapper[4957]: I0218 14:58:34.718408 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.346225 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"03293231-ba61-4099-89c9-b86cd6d9f489","Type":"ContainerStarted","Data":"7b53cf014e68a35476e6b5964276cdb1d32528dd49a3076ed10871ed9c572eac"}
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.346594 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.358165 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b","Type":"ContainerDied","Data":"533953a9f3b8b4ff239f8688cbd02b613ed79898acb173ef09449bfc6ba1b819"}
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.358230 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.358239 4957 scope.go:117] "RemoveContainer" containerID="63740087cdbbd8d9d348d53457e27e0ca5fe2fa12f187074387563ee9ead2b6b"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.364390 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"09169c38-c2c1-43b6-b01e-45320845dc8e","Type":"ContainerStarted","Data":"ce799e429678d9f19b0b2070cf579a3174dd204e1d8584ba19d62d72676ef0a8"}
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.373992 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.879063477 podStartE2EDuration="3.373968557s" podCreationTimestamp="2026-02-18 14:58:32 +0000 UTC" firstStartedPulling="2026-02-18 14:58:33.430155251 +0000 UTC m=+1619.951020005" lastFinishedPulling="2026-02-18 14:58:33.925060341 +0000 UTC m=+1620.445925085" observedRunningTime="2026-02-18 14:58:35.363829594 +0000 UTC m=+1621.884694338" watchObservedRunningTime="2026-02-18 14:58:35.373968557 +0000 UTC m=+1621.894833321"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.406620 4957 scope.go:117] "RemoveContainer" containerID="bb8dea5340466054afe539ff2ccc7ab58a1810b312586cd487a6a6e73dc482ad"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.411383 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.422452 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.451640 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:35 crc kubenswrapper[4957]: E0218 14:58:35.452266 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="sg-core"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452292 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="sg-core"
Feb 18 14:58:35 crc kubenswrapper[4957]: E0218 14:58:35.452308 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-notification-agent"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452316 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-notification-agent"
Feb 18 14:58:35 crc kubenswrapper[4957]: E0218 14:58:35.452336 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="proxy-httpd"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452344 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="proxy-httpd"
Feb 18 14:58:35 crc kubenswrapper[4957]: E0218 14:58:35.452361 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-central-agent"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452369 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-central-agent"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452668 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="proxy-httpd"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452694 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-central-agent"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452714 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="ceilometer-notification-agent"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.452752 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" containerName="sg-core"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.456031 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.459140 4957 scope.go:117] "RemoveContainer" containerID="1e16c55871f4bb19653344e2eff10737813b704cee6acc7536efa492fb30c80f"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.462538 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.462839 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.463082 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.477667 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.512532 4957 scope.go:117] "RemoveContainer" containerID="eeb170b903af85131dc6ae701cedca0c4de379321ed706405103fd7adfcd395e"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560330 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-scripts\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560382 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52ll\" (UniqueName: \"kubernetes.io/projected/203352b7-ba54-4e0f-8759-2774f3f63d3f-kube-api-access-g52ll\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560405 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560447 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560483 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-run-httpd\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560565 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560638 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-log-httpd\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.560693 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-config-data\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.662714 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-log-httpd\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.662844 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-config-data\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.662961 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-scripts\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.663000 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52ll\" (UniqueName: \"kubernetes.io/projected/203352b7-ba54-4e0f-8759-2774f3f63d3f-kube-api-access-g52ll\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.663030 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.663057 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.663105 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-run-httpd\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.663161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.663586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-log-httpd\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.664082 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-run-httpd\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.668377 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-config-data\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.668843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.668903 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.670057 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.673769 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-scripts\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.684578 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52ll\" (UniqueName: \"kubernetes.io/projected/203352b7-ba54-4e0f-8759-2774f3f63d3f-kube-api-access-g52ll\") pod \"ceilometer-0\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") " pod="openstack/ceilometer-0"
Feb 18 14:58:35 crc kubenswrapper[4957]: I0218 14:58:35.781840 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:58:36 crc kubenswrapper[4957]: I0218 14:58:36.228206 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b" path="/var/lib/kubelet/pods/79e6e661-a1cd-4ff5-becc-87ff6dd1fb4b/volumes"
Feb 18 14:58:36 crc kubenswrapper[4957]: I0218 14:58:36.281119 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:36 crc kubenswrapper[4957]: W0218 14:58:36.284291 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod203352b7_ba54_4e0f_8759_2774f3f63d3f.slice/crio-287628a8a35b71721d588d56cb8cbf5dec01b169569ee8182bd16509603c836c WatchSource:0}: Error finding container 287628a8a35b71721d588d56cb8cbf5dec01b169569ee8182bd16509603c836c: Status 404 returned error can't find the container with id 287628a8a35b71721d588d56cb8cbf5dec01b169569ee8182bd16509603c836c
Feb 18 14:58:36 crc kubenswrapper[4957]: I0218 14:58:36.395710 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerStarted","Data":"287628a8a35b71721d588d56cb8cbf5dec01b169569ee8182bd16509603c836c"}
Feb 18 14:58:37 crc kubenswrapper[4957]: I0218 14:58:37.213164 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398"
Feb 18 14:58:37 crc kubenswrapper[4957]: E0218 14:58:37.213911 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 14:58:37 crc kubenswrapper[4957]: I0218 14:58:37.444012 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerStarted","Data":"040f58114e04be83d54b5a99569b315ae60188f30d682ba7deeb3646a868365d"}
Feb 18 14:58:38 crc kubenswrapper[4957]: I0218 14:58:38.459617 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerStarted","Data":"c8c41f65b7007ed98dce02616a574dbaac749e667b4bc8d35ce33149dd109f4e"}
Feb 18 14:58:39 crc kubenswrapper[4957]: I0218 14:58:39.474088 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerStarted","Data":"f69b67abf67493528845930394ec159a63841a2ad8e6c908bae402c040505012"}
Feb 18 14:58:39 crc kubenswrapper[4957]: I0218 14:58:39.842643 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-sg7fz"]
Feb 18 14:58:39 crc kubenswrapper[4957]: I0218 14:58:39.853853 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-sg7fz"]
Feb 18 14:58:39 crc kubenswrapper[4957]: I0218 14:58:39.955510 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-ppkbv"]
Feb 18 14:58:39 crc kubenswrapper[4957]: I0218 14:58:39.957089 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ppkbv"
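
The machine-config-daemon entry above shows a restart back-off that has saturated at "back-off 5m0s": the kubelet delays each restart of a crashing container, doubling the wait per failure from a 10-second base up to a 5-minute cap (the kubelet's long-standing defaults; tunable kubelets may differ). A sketch of the resulting schedule:

    // backoff_schedule.go - the doubling restart back-off implied by the
    // CrashLoopBackOff message (10s base and 5m cap are kubelet defaults).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxBackoff = 5 * time.Minute
        wait := 10 * time.Second
        for attempt := 1; wait < maxBackoff; attempt++ {
            fmt.Printf("restart %d: wait %v\n", attempt, wait)
            wait *= 2
        }
        // After ~5 failures every further restart waits the full cap,
        // which is the "back-off 5m0s restarting failed container=..." state.
        fmt.Println("all later restarts: wait", maxBackoff)
    }
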
Feb 18 14:58:39 crc kubenswrapper[4957]: I0218 14:58:39.967120 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ppkbv"]
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.086129 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctt9\" (UniqueName: \"kubernetes.io/projected/c033e783-4e0d-4ec1-a8c1-877fad072b9b-kube-api-access-mctt9\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.086294 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-combined-ca-bundle\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.086332 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-config-data\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.188198 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-combined-ca-bundle\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.188284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-config-data\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.188450 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctt9\" (UniqueName: \"kubernetes.io/projected/c033e783-4e0d-4ec1-a8c1-877fad072b9b-kube-api-access-mctt9\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.195764 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-config-data\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.199087 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-combined-ca-bundle\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.212122 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mctt9\" (UniqueName: \"kubernetes.io/projected/c033e783-4e0d-4ec1-a8c1-877fad072b9b-kube-api-access-mctt9\") pod \"heat-db-sync-ppkbv\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.228847 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b12676-eb60-406c-a019-461370859d2a" path="/var/lib/kubelet/pods/a3b12676-eb60-406c-a019-461370859d2a/volumes"
Feb 18 14:58:40 crc kubenswrapper[4957]: I0218 14:58:40.280162 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ppkbv"
Feb 18 14:58:41 crc kubenswrapper[4957]: W0218 14:58:41.053531 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc033e783_4e0d_4ec1_a8c1_877fad072b9b.slice/crio-50a8a49a06e7590a1ce05cb2f098a65973c0a12bf65d50bc52d80d3fc5293517 WatchSource:0}: Error finding container 50a8a49a06e7590a1ce05cb2f098a65973c0a12bf65d50bc52d80d3fc5293517: Status 404 returned error can't find the container with id 50a8a49a06e7590a1ce05cb2f098a65973c0a12bf65d50bc52d80d3fc5293517
Feb 18 14:58:41 crc kubenswrapper[4957]: I0218 14:58:41.087345 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ppkbv"]
Feb 18 14:58:41 crc kubenswrapper[4957]: I0218 14:58:41.513301 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ppkbv" event={"ID":"c033e783-4e0d-4ec1-a8c1-877fad072b9b","Type":"ContainerStarted","Data":"50a8a49a06e7590a1ce05cb2f098a65973c0a12bf65d50bc52d80d3fc5293517"}
Feb 18 14:58:41 crc kubenswrapper[4957]: I0218 14:58:41.516442 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerStarted","Data":"b5907f6d054285aa577d5a34fbdfec4f4aa82b0f733ae74f54f025c67a2ec1ef"}
Feb 18 14:58:41 crc kubenswrapper[4957]: I0218 14:58:41.516696 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 14:58:41 crc kubenswrapper[4957]: I0218 14:58:41.550543 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.141407785 podStartE2EDuration="6.550519654s" podCreationTimestamp="2026-02-18 14:58:35 +0000 UTC" firstStartedPulling="2026-02-18 14:58:36.287942679 +0000 UTC m=+1622.808807423" lastFinishedPulling="2026-02-18 14:58:40.697054548 +0000 UTC m=+1627.217919292" observedRunningTime="2026-02-18 14:58:41.545862979 +0000 UTC m=+1628.066727733" watchObservedRunningTime="2026-02-18 14:58:41.550519654 +0000 UTC m=+1628.071384398"
Feb 18 14:58:41 crc kubenswrapper[4957]: I0218 14:58:41.746227 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 18 14:58:42 crc kubenswrapper[4957]: I0218 14:58:42.798631 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 18 14:58:42 crc kubenswrapper[4957]: I0218 14:58:42.836883 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 14:58:44 crc kubenswrapper[4957]: I0218 14:58:44.651444 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 14:58:44 crc kubenswrapper[4957]: I0218 14:58:44.652595 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-central-agent" containerID="cri-o://040f58114e04be83d54b5a99569b315ae60188f30d682ba7deeb3646a868365d" gracePeriod=30
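
The "SyncLoop (probe)" entries bracket the readiness transition: status="" is logged when a container starts and its probe result is still unknown, and status="ready" once the readiness probe first passes (for kube-state-metrics-0 above, roughly seven seconds after its container started). A hypothetical probe of the kind that produces this pattern; the path, port, scheme, and timings are illustrative assumptions, since the log records only the status changes:

    // readiness_probe.go - illustrative readiness probe; the kubelet logs a
    // "SyncLoop (probe)" event each time the aggregated result changes.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/healthz",           // assumption, not from the log
                    Port:   intstr.FromInt(8443), // assumption, not from the log
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            InitialDelaySeconds: 5,
            PeriodSeconds:       10,
        }
        fmt.Printf("%+v\n", probe)
    }
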
Feb 18 14:58:44 crc kubenswrapper[4957]: I0218 14:58:44.652816 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="proxy-httpd" containerID="cri-o://b5907f6d054285aa577d5a34fbdfec4f4aa82b0f733ae74f54f025c67a2ec1ef" gracePeriod=30
Feb 18 14:58:44 crc kubenswrapper[4957]: I0218 14:58:44.652872 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="sg-core" containerID="cri-o://f69b67abf67493528845930394ec159a63841a2ad8e6c908bae402c040505012" gracePeriod=30
Feb 18 14:58:44 crc kubenswrapper[4957]: I0218 14:58:44.652943 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-notification-agent" containerID="cri-o://c8c41f65b7007ed98dce02616a574dbaac749e667b4bc8d35ce33149dd109f4e" gracePeriod=30
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590162 4957 generic.go:334] "Generic (PLEG): container finished" podID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerID="b5907f6d054285aa577d5a34fbdfec4f4aa82b0f733ae74f54f025c67a2ec1ef" exitCode=0
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590473 4957 generic.go:334] "Generic (PLEG): container finished" podID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerID="f69b67abf67493528845930394ec159a63841a2ad8e6c908bae402c040505012" exitCode=2
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590481 4957 generic.go:334] "Generic (PLEG): container finished" podID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerID="c8c41f65b7007ed98dce02616a574dbaac749e667b4bc8d35ce33149dd109f4e" exitCode=0
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590488 4957 generic.go:334] "Generic (PLEG): container finished" podID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerID="040f58114e04be83d54b5a99569b315ae60188f30d682ba7deeb3646a868365d" exitCode=0
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590508 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerDied","Data":"b5907f6d054285aa577d5a34fbdfec4f4aa82b0f733ae74f54f025c67a2ec1ef"}
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590534 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerDied","Data":"f69b67abf67493528845930394ec159a63841a2ad8e6c908bae402c040505012"}
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590545 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerDied","Data":"c8c41f65b7007ed98dce02616a574dbaac749e667b4bc8d35ce33149dd109f4e"}
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.590553 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerDied","Data":"040f58114e04be83d54b5a99569b315ae60188f30d682ba7deeb3646a868365d"}
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.750317 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.887631 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-ceilometer-tls-certs\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.887669 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-log-httpd\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.887770 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-run-httpd\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.887827 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-combined-ca-bundle\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.887884 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-scripts\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.887987 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-config-data\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.888031 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-sg-core-conf-yaml\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.888052 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g52ll\" (UniqueName: \"kubernetes.io/projected/203352b7-ba54-4e0f-8759-2774f3f63d3f-kube-api-access-g52ll\") pod \"203352b7-ba54-4e0f-8759-2774f3f63d3f\" (UID: \"203352b7-ba54-4e0f-8759-2774f3f63d3f\") "
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.889166 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.889930 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.909953 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203352b7-ba54-4e0f-8759-2774f3f63d3f-kube-api-access-g52ll" (OuterVolumeSpecName: "kube-api-access-g52ll") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "kube-api-access-g52ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.909978 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-scripts" (OuterVolumeSpecName: "scripts") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.970762 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.991898 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.991949 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g52ll\" (UniqueName: \"kubernetes.io/projected/203352b7-ba54-4e0f-8759-2774f3f63d3f-kube-api-access-g52ll\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.991965 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.991976 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/203352b7-ba54-4e0f-8759-2774f3f63d3f-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:45 crc kubenswrapper[4957]: I0218 14:58:45.991988 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.005594 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.027795 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.086900 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-config-data" (OuterVolumeSpecName: "config-data") pod "203352b7-ba54-4e0f-8759-2774f3f63d3f" (UID: "203352b7-ba54-4e0f-8759-2774f3f63d3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.095054 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.095099 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.095115 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203352b7-ba54-4e0f-8759-2774f3f63d3f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.559028 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" containerID="cri-o://f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8" gracePeriod=604796 Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.615685 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"203352b7-ba54-4e0f-8759-2774f3f63d3f","Type":"ContainerDied","Data":"287628a8a35b71721d588d56cb8cbf5dec01b169569ee8182bd16509603c836c"} Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.615767 4957 scope.go:117] "RemoveContainer" containerID="b5907f6d054285aa577d5a34fbdfec4f4aa82b0f733ae74f54f025c67a2ec1ef" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.616007 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.658293 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.661181 4957 scope.go:117] "RemoveContainer" containerID="f69b67abf67493528845930394ec159a63841a2ad8e6c908bae402c040505012" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.687070 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.696112 4957 scope.go:117] "RemoveContainer" containerID="c8c41f65b7007ed98dce02616a574dbaac749e667b4bc8d35ce33149dd109f4e" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.725744 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:58:46 crc kubenswrapper[4957]: E0218 14:58:46.726717 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-central-agent" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.726746 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-central-agent" Feb 18 14:58:46 crc kubenswrapper[4957]: E0218 14:58:46.726774 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-notification-agent" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.726784 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-notification-agent" Feb 18 14:58:46 crc kubenswrapper[4957]: E0218 14:58:46.726801 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="sg-core" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.726811 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="sg-core" Feb 18 14:58:46 crc kubenswrapper[4957]: E0218 14:58:46.726823 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="proxy-httpd" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.726831 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="proxy-httpd" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.727147 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-notification-agent" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.727177 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="sg-core" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.727191 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="ceilometer-central-agent" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.727216 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" containerName="proxy-httpd" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.753641 4957 scope.go:117] "RemoveContainer" containerID="040f58114e04be83d54b5a99569b315ae60188f30d682ba7deeb3646a868365d" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.758953 4957 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.766537 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.766739 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.769404 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.775516 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.814895 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.814964 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-run-httpd\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.815053 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.815106 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhg77\" (UniqueName: \"kubernetes.io/projected/64952df6-ca80-4f2b-a8e3-ce0539d32008-kube-api-access-rhg77\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.815127 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-config-data\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.815143 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.815160 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-scripts\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.815179 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-log-httpd\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918048 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918117 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-run-httpd\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918343 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhg77\" (UniqueName: \"kubernetes.io/projected/64952df6-ca80-4f2b-a8e3-ce0539d32008-kube-api-access-rhg77\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918382 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-config-data\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918412 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-scripts\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918631 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-run-httpd\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.918645 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-log-httpd\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.919000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-log-httpd\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.922772 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.923121 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.923253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-config-data\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.924656 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-scripts\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.930232 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:46 crc kubenswrapper[4957]: I0218 14:58:46.939243 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhg77\" (UniqueName: \"kubernetes.io/projected/64952df6-ca80-4f2b-a8e3-ce0539d32008-kube-api-access-rhg77\") pod \"ceilometer-0\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " pod="openstack/ceilometer-0" Feb 18 14:58:47 crc kubenswrapper[4957]: I0218 14:58:47.084354 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:58:47 crc kubenswrapper[4957]: I0218 14:58:47.659280 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:58:47 crc kubenswrapper[4957]: I0218 14:58:47.767221 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" containerID="cri-o://351d83c0114e5e295ff8d56e0e90957a2bdf7388269b2c16995008eb48ee3a67" gracePeriod=604796 Feb 18 14:58:48 crc kubenswrapper[4957]: I0218 14:58:48.218480 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:58:48 crc kubenswrapper[4957]: E0218 14:58:48.220311 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:58:48 crc kubenswrapper[4957]: I0218 14:58:48.233023 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203352b7-ba54-4e0f-8759-2774f3f63d3f" path="/var/lib/kubelet/pods/203352b7-ba54-4e0f-8759-2774f3f63d3f/volumes" Feb 18 14:58:48 crc kubenswrapper[4957]: I0218 14:58:48.677924 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerStarted","Data":"c894a00d0a29ccfc0f7d4a82480d6fd432c8aa19a68f3046bca522108817b4a9"} Feb 18 14:58:48 crc kubenswrapper[4957]: I0218 14:58:48.774008 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:58:48 crc kubenswrapper[4957]: I0218 14:58:48.841804 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.421020 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.544692 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln6fz\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-kube-api-access-ln6fz\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.544775 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-config-data\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545274 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545305 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-tls\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545324 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-confd\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545385 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-erlang-cookie\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545446 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-plugins\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-plugins-conf\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545543 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-server-conf\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545578 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a0cd7a-3f64-4555-96e9-ad69c2518568-pod-info\") pod 
\"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.545600 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a0cd7a-3f64-4555-96e9-ad69c2518568-erlang-cookie-secret\") pod \"81a0cd7a-3f64-4555-96e9-ad69c2518568\" (UID: \"81a0cd7a-3f64-4555-96e9-ad69c2518568\") " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.546827 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.547444 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.548468 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.560123 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/81a0cd7a-3f64-4555-96e9-ad69c2518568-pod-info" (OuterVolumeSpecName: "pod-info") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.583945 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.584085 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-kube-api-access-ln6fz" (OuterVolumeSpecName: "kube-api-access-ln6fz") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "kube-api-access-ln6fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.587707 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f" (OuterVolumeSpecName: "persistence") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). 
InnerVolumeSpecName "pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.603692 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a0cd7a-3f64-4555-96e9-ad69c2518568-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.617378 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-config-data" (OuterVolumeSpecName: "config-data") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648833 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln6fz\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-kube-api-access-ln6fz\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648883 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648926 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") on node \"crc\" " Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648940 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648953 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648968 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648981 4957 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.648997 4957 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a0cd7a-3f64-4555-96e9-ad69c2518568-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.649010 4957 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a0cd7a-3f64-4555-96e9-ad69c2518568-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.693464 4957 
csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.693853 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f") on node "crc" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.703608 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-server-conf" (OuterVolumeSpecName: "server-conf") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.750939 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.750977 4957 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a0cd7a-3f64-4555-96e9-ad69c2518568-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.799540 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "81a0cd7a-3f64-4555-96e9-ad69c2518568" (UID: "81a0cd7a-3f64-4555-96e9-ad69c2518568"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.823336 4957 generic.go:334] "Generic (PLEG): container finished" podID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerID="f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8" exitCode=0 Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.823380 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"81a0cd7a-3f64-4555-96e9-ad69c2518568","Type":"ContainerDied","Data":"f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8"} Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.823408 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"81a0cd7a-3f64-4555-96e9-ad69c2518568","Type":"ContainerDied","Data":"4e9712164a7fce0a4a1172bd337e33d4d6261de86c47fa3fab801240089b1046"} Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.823450 4957 scope.go:117] "RemoveContainer" containerID="f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.823586 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.854294 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a0cd7a-3f64-4555-96e9-ad69c2518568-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.874742 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.890370 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.923917 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:58:53 crc kubenswrapper[4957]: E0218 14:58:53.929104 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.929156 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" Feb 18 14:58:53 crc kubenswrapper[4957]: E0218 14:58:53.929171 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="setup-container" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.929178 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="setup-container" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.929564 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" containerName="rabbitmq" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.941289 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:58:53 crc kubenswrapper[4957]: I0218 14:58:53.954890 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.062470 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/99fe3777-adec-48ee-b2a8-df742111168d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.062609 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.062639 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.063948 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-config-data\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064084 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064283 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064325 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064373 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/99fe3777-adec-48ee-b2a8-df742111168d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064553 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.064599 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljq8k\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-kube-api-access-ljq8k\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166534 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166605 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166644 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/99fe3777-adec-48ee-b2a8-df742111168d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166690 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166720 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166753 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljq8k\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-kube-api-access-ljq8k\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166875 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/99fe3777-adec-48ee-b2a8-df742111168d-erlang-cookie-secret\") pod 
\"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166922 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.166945 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.167014 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-config-data\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.167067 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.167706 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.168407 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.168855 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.170330 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-config-data\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.171010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/99fe3777-adec-48ee-b2a8-df742111168d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.175060 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/99fe3777-adec-48ee-b2a8-df742111168d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.175772 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.179393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.184159 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/99fe3777-adec-48ee-b2a8-df742111168d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.204400 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljq8k\" (UniqueName: \"kubernetes.io/projected/99fe3777-adec-48ee-b2a8-df742111168d-kube-api-access-ljq8k\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.228773 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.228811 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/351bbc9e70a65695cf515d8d5ce5c1884e5f13a79c1a8fc47be9bcdf43886012/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.261730 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a0cd7a-3f64-4555-96e9-ad69c2518568" path="/var/lib/kubelet/pods/81a0cd7a-3f64-4555-96e9-ad69c2518568/volumes" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.466235 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-975c040f-4cc1-48f6-a5ab-06ccc1fd028f\") pod \"rabbitmq-server-2\" (UID: \"99fe3777-adec-48ee-b2a8-df742111168d\") " pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.638522 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.849997 4957 generic.go:334] "Generic (PLEG): container finished" podID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerID="351d83c0114e5e295ff8d56e0e90957a2bdf7388269b2c16995008eb48ee3a67" exitCode=0 Feb 18 14:58:54 crc kubenswrapper[4957]: I0218 14:58:54.850056 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12d531cb-f58f-44a6-a638-29ebb85fdbb3","Type":"ContainerDied","Data":"351d83c0114e5e295ff8d56e0e90957a2bdf7388269b2c16995008eb48ee3a67"} Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.740010 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vpb78"] Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.743495 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.745935 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.758610 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vpb78"] Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.842963 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910222 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910353 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910487 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910507 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74nbk\" (UniqueName: 
\"kubernetes.io/projected/2dffda1d-4d5a-420b-b627-4bc6fd552c61-kube-api-access-74nbk\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910534 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:58 crc kubenswrapper[4957]: I0218 14:58:58.910909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-config\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.013682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.013775 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.013860 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.013923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.013948 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74nbk\" (UniqueName: \"kubernetes.io/projected/2dffda1d-4d5a-420b-b627-4bc6fd552c61-kube-api-access-74nbk\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.013967 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.014045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-config\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.015335 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.015366 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-config\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.015866 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.016093 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.016632 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.016699 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.043248 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74nbk\" (UniqueName: \"kubernetes.io/projected/2dffda1d-4d5a-420b-b627-4bc6fd552c61-kube-api-access-74nbk\") pod \"dnsmasq-dns-7d84b4d45c-vpb78\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:58:59 crc kubenswrapper[4957]: I0218 14:58:59.071860 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:59:00 crc kubenswrapper[4957]: I0218 14:59:00.212785 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:59:00 crc kubenswrapper[4957]: E0218 14:59:00.213709 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.389359 4957 scope.go:117] "RemoveContainer" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.529657 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.572745 4957 scope.go:117] "RemoveContainer" containerID="d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.580162 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-config-data\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.580251 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12d531cb-f58f-44a6-a638-29ebb85fdbb3-erlang-cookie-secret\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.580334 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjjsw\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-kube-api-access-rjjsw\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584234 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584389 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-tls\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584533 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-plugins\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584585 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-plugins-conf\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584624 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-server-conf\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584761 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12d531cb-f58f-44a6-a638-29ebb85fdbb3-pod-info\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.584852 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-confd\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.585248 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-erlang-cookie\") pod \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\" (UID: \"12d531cb-f58f-44a6-a638-29ebb85fdbb3\") " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.588235 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.590648 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.598253 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.603224 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d531cb-f58f-44a6-a638-29ebb85fdbb3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.611015 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.651295 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-kube-api-access-rjjsw" (OuterVolumeSpecName: "kube-api-access-rjjsw") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "kube-api-access-rjjsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.657178 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/12d531cb-f58f-44a6-a638-29ebb85fdbb3-pod-info" (OuterVolumeSpecName: "pod-info") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.662035 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-config-data" (OuterVolumeSpecName: "config-data") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698030 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698079 4957 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698093 4957 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12d531cb-f58f-44a6-a638-29ebb85fdbb3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698108 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698121 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698134 4957 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12d531cb-f58f-44a6-a638-29ebb85fdbb3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698146 4957 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rjjsw\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-kube-api-access-rjjsw\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.698156 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.704922 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48" (OuterVolumeSpecName: "persistence") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.801074 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") on node \"crc\" " Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.805204 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-server-conf" (OuterVolumeSpecName: "server-conf") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.856490 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "12d531cb-f58f-44a6-a638-29ebb85fdbb3" (UID: "12d531cb-f58f-44a6-a638-29ebb85fdbb3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.907331 4957 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12d531cb-f58f-44a6-a638-29ebb85fdbb3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.919972 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12d531cb-f58f-44a6-a638-29ebb85fdbb3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.910948 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.921360 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48") on node "crc" Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.983584 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12d531cb-f58f-44a6-a638-29ebb85fdbb3","Type":"ContainerDied","Data":"78f2929aebeac7505d82fdbf59f327e266f392924bcb88cf2df6c8d626d61787"} Feb 18 14:59:01 crc kubenswrapper[4957]: I0218 14:59:01.984206 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.022371 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.035557 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.063912 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.078028 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:59:02 crc kubenswrapper[4957]: E0218 14:59:02.078836 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.078953 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" Feb 18 14:59:02 crc kubenswrapper[4957]: E0218 14:59:02.079071 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="setup-container" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.079169 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="setup-container" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.079583 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" containerName="rabbitmq" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.081452 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.084446 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.087542 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.087747 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.087847 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.087987 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n2q7z" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.088027 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.089180 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.092570 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226265 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226316 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhj7\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-kube-api-access-hrhj7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226459 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6192b5e-59c5-4986-bac1-41acf8c0d46e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226495 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226520 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226577 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226602 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226633 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226659 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226709 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6192b5e-59c5-4986-bac1-41acf8c0d46e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.226730 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.227335 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d531cb-f58f-44a6-a638-29ebb85fdbb3" path="/var/lib/kubelet/pods/12d531cb-f58f-44a6-a638-29ebb85fdbb3/volumes" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329064 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329125 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329168 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329205 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329260 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6192b5e-59c5-4986-bac1-41acf8c0d46e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329448 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329502 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhj7\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-kube-api-access-hrhj7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329675 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6192b5e-59c5-4986-bac1-41acf8c0d46e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329713 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329746 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.329859 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.330918 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.331512 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.331651 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.331903 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6192b5e-59c5-4986-bac1-41acf8c0d46e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.333305 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.333339 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8c9b356ebc43732f1fe9abea7b6708f9500d803b22e1280123110210f298fda6/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.334837 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6192b5e-59c5-4986-bac1-41acf8c0d46e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.335534 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.336164 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6192b5e-59c5-4986-bac1-41acf8c0d46e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.336172 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.349651 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhj7\" (UniqueName: \"kubernetes.io/projected/a6192b5e-59c5-4986-bac1-41acf8c0d46e-kube-api-access-hrhj7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.384376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-971c377d-ef63-4d1e-9ba8-5f5ad7361f48\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6192b5e-59c5-4986-bac1-41acf8c0d46e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:02 crc kubenswrapper[4957]: I0218 14:59:02.427966 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:10 crc kubenswrapper[4957]: I0218 14:59:10.477364 4957 scope.go:117] "RemoveContainer" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" Feb 18 14:59:10 crc kubenswrapper[4957]: I0218 14:59:10.986497 4957 scope.go:117] "RemoveContainer" containerID="f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8" Feb 18 14:59:10 crc kubenswrapper[4957]: E0218 14:59:10.987191 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8\": container with ID starting with f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8 not found: ID does not exist" containerID="f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8" Feb 18 14:59:10 crc kubenswrapper[4957]: I0218 14:59:10.987231 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8"} err="failed to get container status \"f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8\": rpc error: code = NotFound desc = could not find container \"f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8\": container with ID starting with f083acbf4c7e675f9c2dc0dacda93d033ac034b808f632f4e7aceaf7cba5fad8 not found: ID does not exist" Feb 18 14:59:10 crc kubenswrapper[4957]: I0218 14:59:10.987255 4957 scope.go:117] "RemoveContainer" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" Feb 18 14:59:10 crc kubenswrapper[4957]: E0218 14:59:10.987527 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52\": container with ID starting with a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52 not found: ID does not exist" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" Feb 18 14:59:10 crc kubenswrapper[4957]: I0218 14:59:10.987552 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52"} err="failed to get container status \"a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52\": rpc error: code = NotFound desc = could not find container \"a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52\": container with ID starting with a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52 not found: ID does not exist" Feb 18 14:59:10 crc kubenswrapper[4957]: I0218 14:59:10.987574 4957 scope.go:117] "RemoveContainer" containerID="351d83c0114e5e295ff8d56e0e90957a2bdf7388269b2c16995008eb48ee3a67" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.031267 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.031331 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.031477 4957 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mctt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ppkbv_openstack(c033e783-4e0d-4ec1-a8c1-877fad072b9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.032643 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-ppkbv" podUID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" Feb 18 14:59:11 crc kubenswrapper[4957]: I0218 14:59:11.132113 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vpb78"] Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.138254 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ppkbv" podUID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" Feb 18 14:59:11 crc kubenswrapper[4957]: I0218 14:59:11.458504 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.488861 4957 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container 
k8s_setup-container_rabbitmq-server-2_openstack_81a0cd7a-3f64-4555-96e9-ad69c2518568_0 in pod sandbox 4e9712164a7fce0a4a1172bd337e33d4d6261de86c47fa3fab801240089b1046 from index: no such id: 'a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52'" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.488946 4957 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-server-2_openstack_81a0cd7a-3f64-4555-96e9-ad69c2518568_0 in pod sandbox 4e9712164a7fce0a4a1172bd337e33d4d6261de86c47fa3fab801240089b1046 from index: no such id: 'a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52'" containerID="a137f2d606d0d74bf7ba4698dcc499a5b8f0e2ad8ab294862942ac6ee18f3a52" Feb 18 14:59:11 crc kubenswrapper[4957]: W0218 14:59:11.507375 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dffda1d_4d5a_420b_b627_4bc6fd552c61.slice/crio-543dd541c81e20beece10ef9cba01865628ee1c86cc4965fd3edd994b616ee2e WatchSource:0}: Error finding container 543dd541c81e20beece10ef9cba01865628ee1c86cc4965fd3edd994b616ee2e: Status 404 returned error can't find the container with id 543dd541c81e20beece10ef9cba01865628ee1c86cc4965fd3edd994b616ee2e Feb 18 14:59:11 crc kubenswrapper[4957]: W0218 14:59:11.523376 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99fe3777_adec_48ee_b2a8_df742111168d.slice/crio-0d18e7f4eeb02ca6b77b8ba2d9512c70d90eedce1cee8a7ca2b05c257aa87750 WatchSource:0}: Error finding container 0d18e7f4eeb02ca6b77b8ba2d9512c70d90eedce1cee8a7ca2b05c257aa87750: Status 404 returned error can't find the container with id 0d18e7f4eeb02ca6b77b8ba2d9512c70d90eedce1cee8a7ca2b05c257aa87750 Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.531406 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.531486 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.531628 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68bh9dh677h79h5c8h65dhf5h558h686h579h67h6h57ch65bhbdh698h59ch697h95h75h54bh58bh547h666h664h5h597h56fh685hc5h685h57dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(64952df6-ca80-4f2b-a8e3-ce0539d32008): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:59:11 crc kubenswrapper[4957]: I0218 14:59:11.564992 4957 scope.go:117] "RemoveContainer" containerID="d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270" Feb 18 14:59:11 crc kubenswrapper[4957]: E0218 14:59:11.573492 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270\": container with ID starting with d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270 not found: ID does not exist" containerID="d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270" Feb 18 14:59:11 crc kubenswrapper[4957]: I0218 14:59:11.573535 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270"} err="failed to get container status \"d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270\": rpc error: code = NotFound desc = could not find container \"d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270\": container with ID starting with 
d210d83635aa51b299952aa7eac3ab5e0c00e01b6107d3c3293d4f59d4388270 not found: ID does not exist" Feb 18 14:59:12 crc kubenswrapper[4957]: I0218 14:59:12.097985 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:59:12 crc kubenswrapper[4957]: I0218 14:59:12.199747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6192b5e-59c5-4986-bac1-41acf8c0d46e","Type":"ContainerStarted","Data":"ebd294dbcbcda1f7156b51b588394f972abfce1868cb35fbe3a2d3c7884703f2"} Feb 18 14:59:12 crc kubenswrapper[4957]: I0218 14:59:12.200825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" event={"ID":"2dffda1d-4d5a-420b-b627-4bc6fd552c61","Type":"ContainerStarted","Data":"543dd541c81e20beece10ef9cba01865628ee1c86cc4965fd3edd994b616ee2e"} Feb 18 14:59:12 crc kubenswrapper[4957]: I0218 14:59:12.209747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99fe3777-adec-48ee-b2a8-df742111168d","Type":"ContainerStarted","Data":"0d18e7f4eeb02ca6b77b8ba2d9512c70d90eedce1cee8a7ca2b05c257aa87750"} Feb 18 14:59:13 crc kubenswrapper[4957]: I0218 14:59:13.229093 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerStarted","Data":"7b413331cea5e555f4d93bf6d2d3428b505571c72bf8f0f46960e059983bca28"} Feb 18 14:59:13 crc kubenswrapper[4957]: I0218 14:59:13.231153 4957 generic.go:334] "Generic (PLEG): container finished" podID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerID="128e4498c3fa8a9b55a08ca04dcfc88e7ed6145cb87d5bd9920b057100e8d528" exitCode=0 Feb 18 14:59:13 crc kubenswrapper[4957]: I0218 14:59:13.231202 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" event={"ID":"2dffda1d-4d5a-420b-b627-4bc6fd552c61","Type":"ContainerDied","Data":"128e4498c3fa8a9b55a08ca04dcfc88e7ed6145cb87d5bd9920b057100e8d528"} Feb 18 14:59:14 crc kubenswrapper[4957]: I0218 14:59:14.224296 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:59:14 crc kubenswrapper[4957]: E0218 14:59:14.224793 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:59:14 crc kubenswrapper[4957]: I0218 14:59:14.258180 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6192b5e-59c5-4986-bac1-41acf8c0d46e","Type":"ContainerStarted","Data":"3a083ea2e4e2fe2179abb2c73697e94d99111d7037cc3c400bba53719bee62f2"} Feb 18 14:59:14 crc kubenswrapper[4957]: I0218 14:59:14.262688 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" event={"ID":"2dffda1d-4d5a-420b-b627-4bc6fd552c61","Type":"ContainerStarted","Data":"2a55f27a528bf946836d8e13926ddfd51a8c44db00545a6b13c41702b3361511"} Feb 18 14:59:14 crc kubenswrapper[4957]: I0218 14:59:14.262861 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:59:14 crc 
kubenswrapper[4957]: I0218 14:59:14.266276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99fe3777-adec-48ee-b2a8-df742111168d","Type":"ContainerStarted","Data":"d0ffd74a55fd9c0fb3f765efa8857855f4c8f7e62a89726e2999462f9a371f78"} Feb 18 14:59:14 crc kubenswrapper[4957]: I0218 14:59:14.268808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerStarted","Data":"0981eed4b32399b279bd7ee497db112c2b06ffc0754e95a1fd4627eab628515d"} Feb 18 14:59:14 crc kubenswrapper[4957]: I0218 14:59:14.313257 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" podStartSLOduration=16.313235769 podStartE2EDuration="16.313235769s" podCreationTimestamp="2026-02-18 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:59:14.306221116 +0000 UTC m=+1660.827085880" watchObservedRunningTime="2026-02-18 14:59:14.313235769 +0000 UTC m=+1660.834100523" Feb 18 14:59:16 crc kubenswrapper[4957]: E0218 14:59:16.367652 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" Feb 18 14:59:17 crc kubenswrapper[4957]: I0218 14:59:17.323739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerStarted","Data":"a2151e86737bd73097792d0d09c48facd303756dcb6d7280590c9ceebfcbdb3b"} Feb 18 14:59:17 crc kubenswrapper[4957]: I0218 14:59:17.324295 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:59:17 crc kubenswrapper[4957]: E0218 14:59:17.327479 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" Feb 18 14:59:18 crc kubenswrapper[4957]: E0218 14:59:18.348649 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.074705 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.191329 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"] Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.192466 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="dnsmasq-dns" containerID="cri-o://f0d448bdb7fe25c82d9b33bcf7919dba8fe5fa9bc27d4cacd5306fd5c1257981" gracePeriod=10 Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.435225 
4957 generic.go:334] "Generic (PLEG): container finished" podID="52088720-a981-46e8-be3c-eee35c337203" containerID="f0d448bdb7fe25c82d9b33bcf7919dba8fe5fa9bc27d4cacd5306fd5c1257981" exitCode=0 Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.436441 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" event={"ID":"52088720-a981-46e8-be3c-eee35c337203","Type":"ContainerDied","Data":"f0d448bdb7fe25c82d9b33bcf7919dba8fe5fa9bc27d4cacd5306fd5c1257981"} Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.544023 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-jt2ll"] Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.546121 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.583742 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-jt2ll"] Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607035 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607094 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607147 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lbr\" (UniqueName: \"kubernetes.io/projected/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-kube-api-access-z6lbr\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607192 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607243 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-config\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607376 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.607480 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.714992 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.715119 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.715141 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.715167 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6lbr\" (UniqueName: \"kubernetes.io/projected/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-kube-api-access-z6lbr\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.715201 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.715230 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-config\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.715306 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.716241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.717099 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.725116 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.725550 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.729613 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.733979 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-config\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.767223 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6lbr\" (UniqueName: \"kubernetes.io/projected/b830b39f-23b6-4b85-ba54-e8f4b81d5d5f-kube-api-access-z6lbr\") pod \"dnsmasq-dns-6f6df4f56c-jt2ll\" (UID: \"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:19 crc kubenswrapper[4957]: I0218 14:59:19.916106 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.043879 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.230944 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v98b\" (UniqueName: \"kubernetes.io/projected/52088720-a981-46e8-be3c-eee35c337203-kube-api-access-8v98b\") pod \"52088720-a981-46e8-be3c-eee35c337203\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.231023 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-svc\") pod \"52088720-a981-46e8-be3c-eee35c337203\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.231178 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-swift-storage-0\") pod \"52088720-a981-46e8-be3c-eee35c337203\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.231220 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-nb\") pod \"52088720-a981-46e8-be3c-eee35c337203\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.231387 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-config\") pod \"52088720-a981-46e8-be3c-eee35c337203\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.231410 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-sb\") pod \"52088720-a981-46e8-be3c-eee35c337203\" (UID: \"52088720-a981-46e8-be3c-eee35c337203\") " Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.242081 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52088720-a981-46e8-be3c-eee35c337203-kube-api-access-8v98b" (OuterVolumeSpecName: "kube-api-access-8v98b") pod "52088720-a981-46e8-be3c-eee35c337203" (UID: "52088720-a981-46e8-be3c-eee35c337203"). InnerVolumeSpecName "kube-api-access-8v98b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.296843 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-config" (OuterVolumeSpecName: "config") pod "52088720-a981-46e8-be3c-eee35c337203" (UID: "52088720-a981-46e8-be3c-eee35c337203"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.313175 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52088720-a981-46e8-be3c-eee35c337203" (UID: "52088720-a981-46e8-be3c-eee35c337203"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.321998 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52088720-a981-46e8-be3c-eee35c337203" (UID: "52088720-a981-46e8-be3c-eee35c337203"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.332360 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52088720-a981-46e8-be3c-eee35c337203" (UID: "52088720-a981-46e8-be3c-eee35c337203"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.334287 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v98b\" (UniqueName: \"kubernetes.io/projected/52088720-a981-46e8-be3c-eee35c337203-kube-api-access-8v98b\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.334307 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.334317 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.334327 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.334336 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.356647 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52088720-a981-46e8-be3c-eee35c337203" (UID: "52088720-a981-46e8-be3c-eee35c337203"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.438109 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52088720-a981-46e8-be3c-eee35c337203-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.444077 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-jt2ll"] Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.452533 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" event={"ID":"52088720-a981-46e8-be3c-eee35c337203","Type":"ContainerDied","Data":"c6cffa162176eadc14ccf99431b51292ef45420906b9153acf7c3f38822d4e81"} Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.452589 4957 scope.go:117] "RemoveContainer" containerID="f0d448bdb7fe25c82d9b33bcf7919dba8fe5fa9bc27d4cacd5306fd5c1257981" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.452704 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.454214 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" event={"ID":"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f","Type":"ContainerStarted","Data":"d3e0a232785712b4d543b2a88d6871256618d84cd8eb39e8c79505166e7d072d"} Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.633763 4957 scope.go:117] "RemoveContainer" containerID="7dc6dcd2d39642c23ab8a30597dfe713255b952330e37f782dd5cf1e9be1f01c" Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.638373 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"] Feb 18 14:59:20 crc kubenswrapper[4957]: I0218 14:59:20.650620 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj"] Feb 18 14:59:21 crc kubenswrapper[4957]: I0218 14:59:21.468902 4957 generic.go:334] "Generic (PLEG): container finished" podID="b830b39f-23b6-4b85-ba54-e8f4b81d5d5f" containerID="907d3f30d4954604eb97855f4a221ef9d58f45ff1346f68e2f5ff6eb1d233d2a" exitCode=0 Feb 18 14:59:21 crc kubenswrapper[4957]: I0218 14:59:21.468992 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" event={"ID":"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f","Type":"ContainerDied","Data":"907d3f30d4954604eb97855f4a221ef9d58f45ff1346f68e2f5ff6eb1d233d2a"} Feb 18 14:59:22 crc kubenswrapper[4957]: I0218 14:59:22.231030 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52088720-a981-46e8-be3c-eee35c337203" path="/var/lib/kubelet/pods/52088720-a981-46e8-be3c-eee35c337203/volumes" Feb 18 14:59:22 crc kubenswrapper[4957]: I0218 14:59:22.482832 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" event={"ID":"b830b39f-23b6-4b85-ba54-e8f4b81d5d5f","Type":"ContainerStarted","Data":"806f79a329ac7a7b2628fdcee6158e4cdc8de53bae5ad970936f94f462ad42ef"} Feb 18 14:59:22 crc kubenswrapper[4957]: I0218 14:59:22.483146 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:22 crc kubenswrapper[4957]: I0218 14:59:22.509692 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" podStartSLOduration=3.509671155 podStartE2EDuration="3.509671155s" 
podCreationTimestamp="2026-02-18 14:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:59:22.503028664 +0000 UTC m=+1669.023893418" watchObservedRunningTime="2026-02-18 14:59:22.509671155 +0000 UTC m=+1669.030535899" Feb 18 14:59:24 crc kubenswrapper[4957]: I0218 14:59:24.514675 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ppkbv" event={"ID":"c033e783-4e0d-4ec1-a8c1-877fad072b9b","Type":"ContainerStarted","Data":"3b6fe0430dc9e317cf23e7cbfb0a007c329fb86eb3fa6832d75b081dcf57537d"} Feb 18 14:59:24 crc kubenswrapper[4957]: I0218 14:59:24.544010 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-ppkbv" podStartSLOduration=3.206402848 podStartE2EDuration="45.543987993s" podCreationTimestamp="2026-02-18 14:58:39 +0000 UTC" firstStartedPulling="2026-02-18 14:58:41.064227272 +0000 UTC m=+1627.585092016" lastFinishedPulling="2026-02-18 14:59:23.401812417 +0000 UTC m=+1669.922677161" observedRunningTime="2026-02-18 14:59:24.533046518 +0000 UTC m=+1671.053911302" watchObservedRunningTime="2026-02-18 14:59:24.543987993 +0000 UTC m=+1671.064852737" Feb 18 14:59:24 crc kubenswrapper[4957]: I0218 14:59:24.725364 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9dfnj" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.0:5353: i/o timeout" Feb 18 14:59:25 crc kubenswrapper[4957]: I0218 14:59:25.531686 4957 generic.go:334] "Generic (PLEG): container finished" podID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" containerID="3b6fe0430dc9e317cf23e7cbfb0a007c329fb86eb3fa6832d75b081dcf57537d" exitCode=0 Feb 18 14:59:25 crc kubenswrapper[4957]: I0218 14:59:25.531730 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ppkbv" event={"ID":"c033e783-4e0d-4ec1-a8c1-877fad072b9b","Type":"ContainerDied","Data":"3b6fe0430dc9e317cf23e7cbfb0a007c329fb86eb3fa6832d75b081dcf57537d"} Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.060067 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-ppkbv" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.213010 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:59:27 crc kubenswrapper[4957]: E0218 14:59:27.213559 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.228769 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-combined-ca-bundle\") pod \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.228837 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mctt9\" (UniqueName: \"kubernetes.io/projected/c033e783-4e0d-4ec1-a8c1-877fad072b9b-kube-api-access-mctt9\") pod \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.229024 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-config-data\") pod \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\" (UID: \"c033e783-4e0d-4ec1-a8c1-877fad072b9b\") " Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.235183 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c033e783-4e0d-4ec1-a8c1-877fad072b9b-kube-api-access-mctt9" (OuterVolumeSpecName: "kube-api-access-mctt9") pod "c033e783-4e0d-4ec1-a8c1-877fad072b9b" (UID: "c033e783-4e0d-4ec1-a8c1-877fad072b9b"). InnerVolumeSpecName "kube-api-access-mctt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.270677 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c033e783-4e0d-4ec1-a8c1-877fad072b9b" (UID: "c033e783-4e0d-4ec1-a8c1-877fad072b9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.333150 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.333392 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mctt9\" (UniqueName: \"kubernetes.io/projected/c033e783-4e0d-4ec1-a8c1-877fad072b9b-kube-api-access-mctt9\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.363973 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-config-data" (OuterVolumeSpecName: "config-data") pod "c033e783-4e0d-4ec1-a8c1-877fad072b9b" (UID: "c033e783-4e0d-4ec1-a8c1-877fad072b9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.435810 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c033e783-4e0d-4ec1-a8c1-877fad072b9b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.558984 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ppkbv" event={"ID":"c033e783-4e0d-4ec1-a8c1-877fad072b9b","Type":"ContainerDied","Data":"50a8a49a06e7590a1ce05cb2f098a65973c0a12bf65d50bc52d80d3fc5293517"} Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.559037 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a8a49a06e7590a1ce05cb2f098a65973c0a12bf65d50bc52d80d3fc5293517" Feb 18 14:59:27 crc kubenswrapper[4957]: I0218 14:59:27.559037 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-ppkbv" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.583206 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-596b8bcf84-qf6sp"] Feb 18 14:59:28 crc kubenswrapper[4957]: E0218 14:59:28.584166 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="dnsmasq-dns" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.584185 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="dnsmasq-dns" Feb 18 14:59:28 crc kubenswrapper[4957]: E0218 14:59:28.584232 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" containerName="heat-db-sync" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.584245 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" containerName="heat-db-sync" Feb 18 14:59:28 crc kubenswrapper[4957]: E0218 14:59:28.584253 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="init" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.584260 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="init" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.584566 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" containerName="heat-db-sync" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.584598 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="52088720-a981-46e8-be3c-eee35c337203" containerName="dnsmasq-dns" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.585629 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.602280 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-596b8bcf84-qf6sp"] Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.640809 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-69cc788d47-5c2pf"] Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.643486 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.663766 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-65d4996964-zpvph"] Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.666282 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.671086 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-config-data-custom\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.671299 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-combined-ca-bundle\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.671503 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pzf\" (UniqueName: \"kubernetes.io/projected/44f06eec-0e32-4246-a893-652c9b180b2c-kube-api-access-c4pzf\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.671667 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-config-data\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.683300 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69cc788d47-5c2pf"] Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.703067 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-65d4996964-zpvph"] Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.774390 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-config-data-custom\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.774694 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-internal-tls-certs\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.774844 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-combined-ca-bundle\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.774934 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-config-data-custom\") pod \"heat-cfnapi-69cc788d47-5c2pf\" 
(UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-combined-ca-bundle\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775131 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-internal-tls-certs\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775374 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-combined-ca-bundle\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775453 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pzf\" (UniqueName: \"kubernetes.io/projected/44f06eec-0e32-4246-a893-652c9b180b2c-kube-api-access-c4pzf\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775489 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-config-data-custom\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775552 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-public-tls-certs\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775654 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84mch\" (UniqueName: \"kubernetes.io/projected/2251ef18-33b0-4454-a9ff-2a00fd4974d7-kube-api-access-84mch\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-config-data\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775746 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-config-data\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775886 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-config-data\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.775998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2czj\" (UniqueName: \"kubernetes.io/projected/bd286cd4-02f3-4357-8c0e-bf30451df530-kube-api-access-n2czj\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.776110 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-public-tls-certs\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.780352 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-combined-ca-bundle\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.781030 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-config-data\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.785909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f06eec-0e32-4246-a893-652c9b180b2c-config-data-custom\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.796475 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pzf\" (UniqueName: \"kubernetes.io/projected/44f06eec-0e32-4246-a893-652c9b180b2c-kube-api-access-c4pzf\") pod \"heat-engine-596b8bcf84-qf6sp\" (UID: \"44f06eec-0e32-4246-a893-652c9b180b2c\") " pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.877988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-public-tls-certs\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878047 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84mch\" (UniqueName: 
\"kubernetes.io/projected/2251ef18-33b0-4454-a9ff-2a00fd4974d7-kube-api-access-84mch\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878077 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-config-data\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878136 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-config-data\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878166 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2czj\" (UniqueName: \"kubernetes.io/projected/bd286cd4-02f3-4357-8c0e-bf30451df530-kube-api-access-n2czj\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878193 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-public-tls-certs\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878292 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-internal-tls-certs\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878318 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-config-data-custom\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-combined-ca-bundle\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878366 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-internal-tls-certs\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878429 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-combined-ca-bundle\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.878465 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-config-data-custom\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.883304 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-public-tls-certs\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.883479 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-config-data-custom\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.883990 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-config-data\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.884153 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-config-data-custom\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.884479 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-internal-tls-certs\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.884492 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-combined-ca-bundle\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.885005 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-public-tls-certs\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.885171 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-internal-tls-certs\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: 
\"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.885734 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2251ef18-33b0-4454-a9ff-2a00fd4974d7-combined-ca-bundle\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.886902 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd286cd4-02f3-4357-8c0e-bf30451df530-config-data\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.897498 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2czj\" (UniqueName: \"kubernetes.io/projected/bd286cd4-02f3-4357-8c0e-bf30451df530-kube-api-access-n2czj\") pod \"heat-cfnapi-69cc788d47-5c2pf\" (UID: \"bd286cd4-02f3-4357-8c0e-bf30451df530\") " pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.898880 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84mch\" (UniqueName: \"kubernetes.io/projected/2251ef18-33b0-4454-a9ff-2a00fd4974d7-kube-api-access-84mch\") pod \"heat-api-65d4996964-zpvph\" (UID: \"2251ef18-33b0-4454-a9ff-2a00fd4974d7\") " pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.908240 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.976972 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:28 crc kubenswrapper[4957]: I0218 14:59:28.988751 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.489736 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-596b8bcf84-qf6sp"] Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.592162 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-596b8bcf84-qf6sp" event={"ID":"44f06eec-0e32-4246-a893-652c9b180b2c","Type":"ContainerStarted","Data":"6f7c7107ee91627ddfe6917a5638d89e4f6b34c41b151457bb978505c00a0525"} Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.594682 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69cc788d47-5c2pf"] Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.613056 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-65d4996964-zpvph"] Feb 18 14:59:29 crc kubenswrapper[4957]: W0218 14:59:29.614896 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd286cd4_02f3_4357_8c0e_bf30451df530.slice/crio-0969618b3a6588797816ed5430daa5b608a7300f92216f1976b0040594e71f49 WatchSource:0}: Error finding container 0969618b3a6588797816ed5430daa5b608a7300f92216f1976b0040594e71f49: Status 404 returned error can't find the container with id 0969618b3a6588797816ed5430daa5b608a7300f92216f1976b0040594e71f49 Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.918727 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-jt2ll" Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.999068 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vpb78"] Feb 18 14:59:29 crc kubenswrapper[4957]: I0218 14:59:29.999455 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerName="dnsmasq-dns" containerID="cri-o://2a55f27a528bf946836d8e13926ddfd51a8c44db00545a6b13c41702b3361511" gracePeriod=10 Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.617559 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-65d4996964-zpvph" event={"ID":"2251ef18-33b0-4454-a9ff-2a00fd4974d7","Type":"ContainerStarted","Data":"61ea9563f9aa373708f5877e9e8d42949e5502612f6777f536db1210549f886e"} Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.621621 4957 generic.go:334] "Generic (PLEG): container finished" podID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerID="2a55f27a528bf946836d8e13926ddfd51a8c44db00545a6b13c41702b3361511" exitCode=0 Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.621707 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" event={"ID":"2dffda1d-4d5a-420b-b627-4bc6fd552c61","Type":"ContainerDied","Data":"2a55f27a528bf946836d8e13926ddfd51a8c44db00545a6b13c41702b3361511"} Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.639759 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" event={"ID":"bd286cd4-02f3-4357-8c0e-bf30451df530","Type":"ContainerStarted","Data":"0969618b3a6588797816ed5430daa5b608a7300f92216f1976b0040594e71f49"} Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.648107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-596b8bcf84-qf6sp" 
event={"ID":"44f06eec-0e32-4246-a893-652c9b180b2c","Type":"ContainerStarted","Data":"ede6e07a34478e3ed305d903fc1e5c94fc8dbe06149795339b234f55405a5acb"} Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.648332 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.672750 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-596b8bcf84-qf6sp" podStartSLOduration=2.67273159 podStartE2EDuration="2.67273159s" podCreationTimestamp="2026-02-18 14:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:59:30.669731983 +0000 UTC m=+1677.190596727" watchObservedRunningTime="2026-02-18 14:59:30.67273159 +0000 UTC m=+1677.193596334" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.727063 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.840499 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-config\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.842255 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-openstack-edpm-ipam\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.842335 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-sb\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.842369 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-nb\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.842457 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-svc\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.842571 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74nbk\" (UniqueName: \"kubernetes.io/projected/2dffda1d-4d5a-420b-b627-4bc6fd552c61-kube-api-access-74nbk\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.843067 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-swift-storage-0\") pod \"2dffda1d-4d5a-420b-b627-4bc6fd552c61\" (UID: 
\"2dffda1d-4d5a-420b-b627-4bc6fd552c61\") " Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.849219 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dffda1d-4d5a-420b-b627-4bc6fd552c61-kube-api-access-74nbk" (OuterVolumeSpecName: "kube-api-access-74nbk") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "kube-api-access-74nbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.926810 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.943830 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-config" (OuterVolumeSpecName: "config") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.946737 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.946770 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74nbk\" (UniqueName: \"kubernetes.io/projected/2dffda1d-4d5a-420b-b627-4bc6fd552c61-kube-api-access-74nbk\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.946784 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.965122 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.977235 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:30 crc kubenswrapper[4957]: I0218 14:59:30.983006 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.016677 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2dffda1d-4d5a-420b-b627-4bc6fd552c61" (UID: "2dffda1d-4d5a-420b-b627-4bc6fd552c61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.048135 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.048188 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.048212 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.048226 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dffda1d-4d5a-420b-b627-4bc6fd552c61-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.663626 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" event={"ID":"2dffda1d-4d5a-420b-b627-4bc6fd552c61","Type":"ContainerDied","Data":"543dd541c81e20beece10ef9cba01865628ee1c86cc4965fd3edd994b616ee2e"} Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.663678 4957 scope.go:117] "RemoveContainer" containerID="2a55f27a528bf946836d8e13926ddfd51a8c44db00545a6b13c41702b3361511" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.663696 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vpb78" Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.710758 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vpb78"] Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.721435 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vpb78"] Feb 18 14:59:31 crc kubenswrapper[4957]: I0218 14:59:31.772064 4957 scope.go:117] "RemoveContainer" containerID="128e4498c3fa8a9b55a08ca04dcfc88e7ed6145cb87d5bd9920b057100e8d528" Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.234946 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" path="/var/lib/kubelet/pods/2dffda1d-4d5a-420b-b627-4bc6fd552c61/volumes" Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.239997 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.694806 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-65d4996964-zpvph" event={"ID":"2251ef18-33b0-4454-a9ff-2a00fd4974d7","Type":"ContainerStarted","Data":"dedeaa1d4c29d163f7922ab844be04d7a41dcfc6a8f92643466dccb88ae4fa29"} Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.696094 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.714108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" event={"ID":"bd286cd4-02f3-4357-8c0e-bf30451df530","Type":"ContainerStarted","Data":"7ab4cec2b2e0a700423c04cb9c58afe85fb1df68656e01c419024616dbf02128"} Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.714275 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.741918 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-65d4996964-zpvph" podStartSLOduration=2.586729598 podStartE2EDuration="4.741901133s" podCreationTimestamp="2026-02-18 14:59:28 +0000 UTC" firstStartedPulling="2026-02-18 14:59:29.619866671 +0000 UTC m=+1676.140731415" lastFinishedPulling="2026-02-18 14:59:31.775038186 +0000 UTC m=+1678.295902950" observedRunningTime="2026-02-18 14:59:32.727280291 +0000 UTC m=+1679.248145045" watchObservedRunningTime="2026-02-18 14:59:32.741901133 +0000 UTC m=+1679.262765877" Feb 18 14:59:32 crc kubenswrapper[4957]: I0218 14:59:32.758656 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" podStartSLOduration=2.605127298 podStartE2EDuration="4.758638296s" podCreationTimestamp="2026-02-18 14:59:28 +0000 UTC" firstStartedPulling="2026-02-18 14:59:29.620693204 +0000 UTC m=+1676.141557948" lastFinishedPulling="2026-02-18 14:59:31.774204202 +0000 UTC m=+1678.295068946" observedRunningTime="2026-02-18 14:59:32.749697228 +0000 UTC m=+1679.270562032" watchObservedRunningTime="2026-02-18 14:59:32.758638296 +0000 UTC m=+1679.279503040" Feb 18 14:59:33 crc kubenswrapper[4957]: I0218 14:59:33.748809 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerStarted","Data":"ee7bfd0150096901eaacb13ccf0a2ce0017054cd8ce3e58ca9d7e0cd72b7f6be"} Feb 18 14:59:33 crc 
kubenswrapper[4957]: I0218 14:59:33.800534 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.889262284 podStartE2EDuration="47.800507468s" podCreationTimestamp="2026-02-18 14:58:46 +0000 UTC" firstStartedPulling="2026-02-18 14:58:47.655601668 +0000 UTC m=+1634.176466402" lastFinishedPulling="2026-02-18 14:59:32.566846842 +0000 UTC m=+1679.087711586" observedRunningTime="2026-02-18 14:59:33.785401702 +0000 UTC m=+1680.306266486" watchObservedRunningTime="2026-02-18 14:59:33.800507468 +0000 UTC m=+1680.321372232" Feb 18 14:59:40 crc kubenswrapper[4957]: I0218 14:59:40.323009 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" Feb 18 14:59:40 crc kubenswrapper[4957]: I0218 14:59:40.365589 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-65d4996964-zpvph" Feb 18 14:59:40 crc kubenswrapper[4957]: I0218 14:59:40.411337 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77b45dc78-6kx7v"] Feb 18 14:59:40 crc kubenswrapper[4957]: I0218 14:59:40.411716 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" podUID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" containerName="heat-cfnapi" containerID="cri-o://455f8ff9066c4fe019e20e8146393f879d4144a9dda31bcc1aadb3d517bbda3d" gracePeriod=60 Feb 18 14:59:40 crc kubenswrapper[4957]: I0218 14:59:40.463055 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-655997456c-8vtx7"] Feb 18 14:59:40 crc kubenswrapper[4957]: I0218 14:59:40.463291 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-655997456c-8vtx7" podUID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" containerName="heat-api" containerID="cri-o://261e0a582ceca480c2d1a51b5cc803e626d649d129a07f2898f597fa705b1a98" gracePeriod=60 Feb 18 14:59:42 crc kubenswrapper[4957]: I0218 14:59:42.214010 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:59:42 crc kubenswrapper[4957]: E0218 14:59:42.214853 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:59:43 crc kubenswrapper[4957]: I0218 14:59:43.604815 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" podUID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.221:8000/healthcheck\": read tcp 10.217.0.2:33594->10.217.0.221:8000: read: connection reset by peer" Feb 18 14:59:43 crc kubenswrapper[4957]: I0218 14:59:43.622106 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-655997456c-8vtx7" podUID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.220:8004/healthcheck\": read tcp 10.217.0.2:35934->10.217.0.220:8004: read: connection reset by peer" Feb 18 14:59:43 crc kubenswrapper[4957]: I0218 14:59:43.875301 4957 generic.go:334] "Generic (PLEG): container 
finished" podID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" containerID="261e0a582ceca480c2d1a51b5cc803e626d649d129a07f2898f597fa705b1a98" exitCode=0 Feb 18 14:59:43 crc kubenswrapper[4957]: I0218 14:59:43.875382 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655997456c-8vtx7" event={"ID":"1b86c2f5-26d2-4702-89e3-e093e6cf4b21","Type":"ContainerDied","Data":"261e0a582ceca480c2d1a51b5cc803e626d649d129a07f2898f597fa705b1a98"} Feb 18 14:59:43 crc kubenswrapper[4957]: I0218 14:59:43.884171 4957 generic.go:334] "Generic (PLEG): container finished" podID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" containerID="455f8ff9066c4fe019e20e8146393f879d4144a9dda31bcc1aadb3d517bbda3d" exitCode=0 Feb 18 14:59:43 crc kubenswrapper[4957]: I0218 14:59:43.884213 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" event={"ID":"f1e0aae7-8a7f-4147-9abb-23f7c7691351","Type":"ContainerDied","Data":"455f8ff9066c4fe019e20e8146393f879d4144a9dda31bcc1aadb3d517bbda3d"} Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.262940 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.269898 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-655997456c-8vtx7" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.310916 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data-custom\") pod \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311025 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nz47\" (UniqueName: \"kubernetes.io/projected/f1e0aae7-8a7f-4147-9abb-23f7c7691351-kube-api-access-5nz47\") pod \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311049 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-kube-api-access-xtgdv\") pod \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311079 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-public-tls-certs\") pod \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311145 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data-custom\") pod \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311243 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-internal-tls-certs\") pod \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\" (UID: 
\"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311293 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data\") pod \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311339 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data\") pod \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311369 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-combined-ca-bundle\") pod \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311430 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-combined-ca-bundle\") pod \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311460 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-public-tls-certs\") pod \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\" (UID: \"f1e0aae7-8a7f-4147-9abb-23f7c7691351\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.311532 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-internal-tls-certs\") pod \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\" (UID: \"1b86c2f5-26d2-4702-89e3-e093e6cf4b21\") " Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.348432 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e0aae7-8a7f-4147-9abb-23f7c7691351-kube-api-access-5nz47" (OuterVolumeSpecName: "kube-api-access-5nz47") pod "f1e0aae7-8a7f-4147-9abb-23f7c7691351" (UID: "f1e0aae7-8a7f-4147-9abb-23f7c7691351"). InnerVolumeSpecName "kube-api-access-5nz47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.394579 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1b86c2f5-26d2-4702-89e3-e093e6cf4b21" (UID: "1b86c2f5-26d2-4702-89e3-e093e6cf4b21"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.394692 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-kube-api-access-xtgdv" (OuterVolumeSpecName: "kube-api-access-xtgdv") pod "1b86c2f5-26d2-4702-89e3-e093e6cf4b21" (UID: "1b86c2f5-26d2-4702-89e3-e093e6cf4b21"). InnerVolumeSpecName "kube-api-access-xtgdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.396523 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1e0aae7-8a7f-4147-9abb-23f7c7691351" (UID: "f1e0aae7-8a7f-4147-9abb-23f7c7691351"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.432995 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.433032 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nz47\" (UniqueName: \"kubernetes.io/projected/f1e0aae7-8a7f-4147-9abb-23f7c7691351-kube-api-access-5nz47\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.433047 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtgdv\" (UniqueName: \"kubernetes.io/projected/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-kube-api-access-xtgdv\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.433058 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.448980 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1e0aae7-8a7f-4147-9abb-23f7c7691351" (UID: "f1e0aae7-8a7f-4147-9abb-23f7c7691351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.449779 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data" (OuterVolumeSpecName: "config-data") pod "1b86c2f5-26d2-4702-89e3-e093e6cf4b21" (UID: "1b86c2f5-26d2-4702-89e3-e093e6cf4b21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.466884 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f1e0aae7-8a7f-4147-9abb-23f7c7691351" (UID: "f1e0aae7-8a7f-4147-9abb-23f7c7691351"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.467884 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b86c2f5-26d2-4702-89e3-e093e6cf4b21" (UID: "1b86c2f5-26d2-4702-89e3-e093e6cf4b21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.468821 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1b86c2f5-26d2-4702-89e3-e093e6cf4b21" (UID: "1b86c2f5-26d2-4702-89e3-e093e6cf4b21"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.496208 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f1e0aae7-8a7f-4147-9abb-23f7c7691351" (UID: "f1e0aae7-8a7f-4147-9abb-23f7c7691351"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.499357 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data" (OuterVolumeSpecName: "config-data") pod "f1e0aae7-8a7f-4147-9abb-23f7c7691351" (UID: "f1e0aae7-8a7f-4147-9abb-23f7c7691351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.505570 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1b86c2f5-26d2-4702-89e3-e093e6cf4b21" (UID: "1b86c2f5-26d2-4702-89e3-e093e6cf4b21"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535104 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535156 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535175 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535185 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535194 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535202 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535211 4957 reconciler_common.go:293] "Volume detached for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e0aae7-8a7f-4147-9abb-23f7c7691351-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.535218 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b86c2f5-26d2-4702-89e3-e093e6cf4b21-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.897456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655997456c-8vtx7" event={"ID":"1b86c2f5-26d2-4702-89e3-e093e6cf4b21","Type":"ContainerDied","Data":"8f36e313704e970af78e4fec2f7907247bfe43f7fd5a3962be535e7237b87ce1"} Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.897796 4957 scope.go:117] "RemoveContainer" containerID="261e0a582ceca480c2d1a51b5cc803e626d649d129a07f2898f597fa705b1a98" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.897744 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-655997456c-8vtx7" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.906342 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" event={"ID":"f1e0aae7-8a7f-4147-9abb-23f7c7691351","Type":"ContainerDied","Data":"9dc00d016aae5eb6ec71bc537c384b00cf1f957722d6ac321bee5f9478dc743e"} Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.906495 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b45dc78-6kx7v" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.942595 4957 scope.go:117] "RemoveContainer" containerID="455f8ff9066c4fe019e20e8146393f879d4144a9dda31bcc1aadb3d517bbda3d" Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.954955 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-655997456c-8vtx7"] Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.971901 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-655997456c-8vtx7"] Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.983714 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77b45dc78-6kx7v"] Feb 18 14:59:44 crc kubenswrapper[4957]: I0218 14:59:44.995691 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-77b45dc78-6kx7v"] Feb 18 14:59:46 crc kubenswrapper[4957]: I0218 14:59:46.243669 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" path="/var/lib/kubelet/pods/1b86c2f5-26d2-4702-89e3-e093e6cf4b21/volumes" Feb 18 14:59:46 crc kubenswrapper[4957]: I0218 14:59:46.244922 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" path="/var/lib/kubelet/pods/f1e0aae7-8a7f-4147-9abb-23f7c7691351/volumes" Feb 18 14:59:46 crc kubenswrapper[4957]: I0218 14:59:46.945777 4957 generic.go:334] "Generic (PLEG): container finished" podID="a6192b5e-59c5-4986-bac1-41acf8c0d46e" containerID="3a083ea2e4e2fe2179abb2c73697e94d99111d7037cc3c400bba53719bee62f2" exitCode=0 Feb 18 14:59:46 crc kubenswrapper[4957]: I0218 14:59:46.945888 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6192b5e-59c5-4986-bac1-41acf8c0d46e","Type":"ContainerDied","Data":"3a083ea2e4e2fe2179abb2c73697e94d99111d7037cc3c400bba53719bee62f2"} Feb 18 14:59:46 crc kubenswrapper[4957]: I0218 14:59:46.949542 4957 
generic.go:334] "Generic (PLEG): container finished" podID="99fe3777-adec-48ee-b2a8-df742111168d" containerID="d0ffd74a55fd9c0fb3f765efa8857855f4c8f7e62a89726e2999462f9a371f78" exitCode=0 Feb 18 14:59:46 crc kubenswrapper[4957]: I0218 14:59:46.949676 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99fe3777-adec-48ee-b2a8-df742111168d","Type":"ContainerDied","Data":"d0ffd74a55fd9c0fb3f765efa8857855f4c8f7e62a89726e2999462f9a371f78"} Feb 18 14:59:47 crc kubenswrapper[4957]: I0218 14:59:47.967528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6192b5e-59c5-4986-bac1-41acf8c0d46e","Type":"ContainerStarted","Data":"6fdba86333fdcdc486c3ebd9a73831d76271e0454a4df288f210f8c3cb1016d0"} Feb 18 14:59:47 crc kubenswrapper[4957]: I0218 14:59:47.968710 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:59:47 crc kubenswrapper[4957]: I0218 14:59:47.971468 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99fe3777-adec-48ee-b2a8-df742111168d","Type":"ContainerStarted","Data":"28f92aa6a606018b508e4794479f4752aab1afc8dfa53f5e7a8e2059840015db"} Feb 18 14:59:47 crc kubenswrapper[4957]: I0218 14:59:47.971749 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 18 14:59:48 crc kubenswrapper[4957]: I0218 14:59:48.010600 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.010572671 podStartE2EDuration="46.010572671s" podCreationTimestamp="2026-02-18 14:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:59:48.003299981 +0000 UTC m=+1694.524164735" watchObservedRunningTime="2026-02-18 14:59:48.010572671 +0000 UTC m=+1694.531437425" Feb 18 14:59:48 crc kubenswrapper[4957]: I0218 14:59:48.049689 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=55.049663889 podStartE2EDuration="55.049663889s" podCreationTimestamp="2026-02-18 14:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:59:48.036043956 +0000 UTC m=+1694.556908700" watchObservedRunningTime="2026-02-18 14:59:48.049663889 +0000 UTC m=+1694.570528633" Feb 18 14:59:48 crc kubenswrapper[4957]: I0218 14:59:48.952294 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-596b8bcf84-qf6sp" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.021907 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-687db6759-27j8z"] Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.022198 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-687db6759-27j8z" podUID="befc3a14-3df1-45fd-9d4c-91033abb4d61" containerName="heat-engine" containerID="cri-o://4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108" gracePeriod=60 Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.205829 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48"] Feb 18 14:59:49 crc kubenswrapper[4957]: E0218 14:59:49.206320 4957 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" containerName="heat-api" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206340 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" containerName="heat-api" Feb 18 14:59:49 crc kubenswrapper[4957]: E0218 14:59:49.206353 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerName="init" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206359 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerName="init" Feb 18 14:59:49 crc kubenswrapper[4957]: E0218 14:59:49.206380 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerName="dnsmasq-dns" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206386 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerName="dnsmasq-dns" Feb 18 14:59:49 crc kubenswrapper[4957]: E0218 14:59:49.206409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" containerName="heat-cfnapi" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206433 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" containerName="heat-cfnapi" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206647 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b86c2f5-26d2-4702-89e3-e093e6cf4b21" containerName="heat-api" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206662 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dffda1d-4d5a-420b-b627-4bc6fd552c61" containerName="dnsmasq-dns" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.206684 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e0aae7-8a7f-4147-9abb-23f7c7691351" containerName="heat-cfnapi" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.207481 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.212041 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.212260 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.225141 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48"] Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.225297 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.225530 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.266862 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcdp\" (UniqueName: \"kubernetes.io/projected/2cef43ea-a55c-4f05-8598-54b9bfc950b3-kube-api-access-bpcdp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.267201 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.267307 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.267559 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.369888 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.370001 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.370076 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.370162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpcdp\" (UniqueName: \"kubernetes.io/projected/2cef43ea-a55c-4f05-8598-54b9bfc950b3-kube-api-access-bpcdp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.381517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.384786 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.389124 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.392195 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpcdp\" (UniqueName: \"kubernetes.io/projected/2cef43ea-a55c-4f05-8598-54b9bfc950b3-kube-api-access-bpcdp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:49 crc kubenswrapper[4957]: I0218 14:59:49.561262 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 14:59:50 crc kubenswrapper[4957]: I0218 14:59:50.862786 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48"] Feb 18 14:59:51 crc kubenswrapper[4957]: I0218 14:59:51.028307 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" event={"ID":"2cef43ea-a55c-4f05-8598-54b9bfc950b3","Type":"ContainerStarted","Data":"2e7764d284428e4732671a82a4a5094e1b2155dd2b888690b4df2e36ea462a51"} Feb 18 14:59:53 crc kubenswrapper[4957]: I0218 14:59:53.214366 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 14:59:53 crc kubenswrapper[4957]: E0218 14:59:53.215136 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 14:59:56 crc kubenswrapper[4957]: E0218 14:59:56.472222 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:59:56 crc kubenswrapper[4957]: E0218 14:59:56.475182 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:59:56 crc kubenswrapper[4957]: E0218 14:59:56.477160 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:59:56 crc kubenswrapper[4957]: E0218 14:59:56.477257 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-687db6759-27j8z" podUID="befc3a14-3df1-45fd-9d4c-91033abb4d61" containerName="heat-engine" Feb 18 14:59:59 crc kubenswrapper[4957]: I0218 14:59:59.160052 4957 generic.go:334] "Generic (PLEG): container finished" podID="befc3a14-3df1-45fd-9d4c-91033abb4d61" containerID="4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108" exitCode=0 Feb 18 14:59:59 crc kubenswrapper[4957]: I0218 14:59:59.160138 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-687db6759-27j8z" event={"ID":"befc3a14-3df1-45fd-9d4c-91033abb4d61","Type":"ContainerDied","Data":"4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108"} Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.131906 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-db-sync-xmtj7"] Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.155500 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-xmtj7"] Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.173458 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh"] Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.177257 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.187378 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.187557 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.237729 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af20e059-3e19-4e13-be41-de0fb244b627" path="/var/lib/kubelet/pods/af20e059-3e19-4e13-be41-de0fb244b627/volumes" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.238547 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh"] Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.317137 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/951cc00a-60f2-4f26-a0a9-8c9313980f92-secret-volume\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.317333 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnctc\" (UniqueName: \"kubernetes.io/projected/951cc00a-60f2-4f26-a0a9-8c9313980f92-kube-api-access-lnctc\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.317362 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/951cc00a-60f2-4f26-a0a9-8c9313980f92-config-volume\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.320232 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-stkjp"] Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.322369 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.324886 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.345558 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-stkjp"] Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420003 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-combined-ca-bundle\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420077 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-config-data\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420113 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctc\" (UniqueName: \"kubernetes.io/projected/951cc00a-60f2-4f26-a0a9-8c9313980f92-kube-api-access-lnctc\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420144 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nk4l\" (UniqueName: \"kubernetes.io/projected/6fe56e22-16a2-4e2a-a473-54592ab46673-kube-api-access-5nk4l\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420173 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/951cc00a-60f2-4f26-a0a9-8c9313980f92-config-volume\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420597 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/951cc00a-60f2-4f26-a0a9-8c9313980f92-secret-volume\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.420669 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-scripts\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.421555 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/951cc00a-60f2-4f26-a0a9-8c9313980f92-config-volume\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.437593 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/951cc00a-60f2-4f26-a0a9-8c9313980f92-secret-volume\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.438364 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnctc\" (UniqueName: \"kubernetes.io/projected/951cc00a-60f2-4f26-a0a9-8c9313980f92-kube-api-access-lnctc\") pod \"collect-profiles-29523780-xzvqh\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.523103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-scripts\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.523525 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-combined-ca-bundle\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.523656 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-config-data\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.523781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nk4l\" (UniqueName: \"kubernetes.io/projected/6fe56e22-16a2-4e2a-a473-54592ab46673-kube-api-access-5nk4l\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.525496 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.527949 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-scripts\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.528156 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-combined-ca-bundle\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.530963 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-config-data\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.547010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nk4l\" (UniqueName: \"kubernetes.io/projected/6fe56e22-16a2-4e2a-a473-54592ab46673-kube-api-access-5nk4l\") pod \"aodh-db-sync-stkjp\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:00 crc kubenswrapper[4957]: I0218 15:00:00.648806 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.690251 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-687db6759-27j8z" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.844046 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh"] Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.855365 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-stkjp"] Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.859000 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4qtg\" (UniqueName: \"kubernetes.io/projected/befc3a14-3df1-45fd-9d4c-91033abb4d61-kube-api-access-p4qtg\") pod \"befc3a14-3df1-45fd-9d4c-91033abb4d61\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.859169 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data-custom\") pod \"befc3a14-3df1-45fd-9d4c-91033abb4d61\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.859339 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-combined-ca-bundle\") pod \"befc3a14-3df1-45fd-9d4c-91033abb4d61\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.859468 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data\") pod \"befc3a14-3df1-45fd-9d4c-91033abb4d61\" (UID: \"befc3a14-3df1-45fd-9d4c-91033abb4d61\") " Feb 18 15:00:01 crc kubenswrapper[4957]: W0218 15:00:01.871005 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fe56e22_16a2_4e2a_a473_54592ab46673.slice/crio-5631a22f394a58d3437f1ebdff09f66e3b44a7b11fcedfe73f18120118af9c97 WatchSource:0}: Error finding container 5631a22f394a58d3437f1ebdff09f66e3b44a7b11fcedfe73f18120118af9c97: Status 404 returned error can't find the container with id 5631a22f394a58d3437f1ebdff09f66e3b44a7b11fcedfe73f18120118af9c97 Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.873855 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "befc3a14-3df1-45fd-9d4c-91033abb4d61" (UID: "befc3a14-3df1-45fd-9d4c-91033abb4d61"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.875716 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befc3a14-3df1-45fd-9d4c-91033abb4d61-kube-api-access-p4qtg" (OuterVolumeSpecName: "kube-api-access-p4qtg") pod "befc3a14-3df1-45fd-9d4c-91033abb4d61" (UID: "befc3a14-3df1-45fd-9d4c-91033abb4d61"). InnerVolumeSpecName "kube-api-access-p4qtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.912718 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befc3a14-3df1-45fd-9d4c-91033abb4d61" (UID: "befc3a14-3df1-45fd-9d4c-91033abb4d61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.934836 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data" (OuterVolumeSpecName: "config-data") pod "befc3a14-3df1-45fd-9d4c-91033abb4d61" (UID: "befc3a14-3df1-45fd-9d4c-91033abb4d61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.962203 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.962238 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4qtg\" (UniqueName: \"kubernetes.io/projected/befc3a14-3df1-45fd-9d4c-91033abb4d61-kube-api-access-p4qtg\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.962251 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:01 crc kubenswrapper[4957]: I0218 15:00:01.962260 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befc3a14-3df1-45fd-9d4c-91033abb4d61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.250134 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" event={"ID":"951cc00a-60f2-4f26-a0a9-8c9313980f92","Type":"ContainerStarted","Data":"fc401c94ce371f721f34a16ad4c8c8adea7e25d149a3ef0ca9b1f0010f8a81ca"} Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.250553 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" event={"ID":"951cc00a-60f2-4f26-a0a9-8c9313980f92","Type":"ContainerStarted","Data":"4b276415d9a73842291b423174768504df9b494eaa6884f09b06abd62fc97beb"} Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.254258 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" event={"ID":"2cef43ea-a55c-4f05-8598-54b9bfc950b3","Type":"ContainerStarted","Data":"3f1fcb4321663508d7314f54625be538eefdd05d7ad6f3203921530777a690d3"} Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.257143 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-687db6759-27j8z" event={"ID":"befc3a14-3df1-45fd-9d4c-91033abb4d61","Type":"ContainerDied","Data":"394e3a2d64c211a306d9b2e49ec74d77ff38ee7a750bdd9ea1bac6f05b8bd113"} Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.257188 4957 scope.go:117] "RemoveContainer" containerID="4acb5f41410eaf0ea424db0e036cd829ec37d06029defac53868da981a760108" Feb 18 15:00:02 
crc kubenswrapper[4957]: I0218 15:00:02.257201 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-687db6759-27j8z" Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.261377 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-stkjp" event={"ID":"6fe56e22-16a2-4e2a-a473-54592ab46673","Type":"ContainerStarted","Data":"5631a22f394a58d3437f1ebdff09f66e3b44a7b11fcedfe73f18120118af9c97"} Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.293657 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" podStartSLOduration=2.293627779 podStartE2EDuration="2.293627779s" podCreationTimestamp="2026-02-18 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:00:02.267750983 +0000 UTC m=+1708.788615747" watchObservedRunningTime="2026-02-18 15:00:02.293627779 +0000 UTC m=+1708.814492523" Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.304475 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" podStartSLOduration=2.8962739280000003 podStartE2EDuration="13.304447522s" podCreationTimestamp="2026-02-18 14:59:49 +0000 UTC" firstStartedPulling="2026-02-18 14:59:50.881503648 +0000 UTC m=+1697.402368382" lastFinishedPulling="2026-02-18 15:00:01.289677232 +0000 UTC m=+1707.810541976" observedRunningTime="2026-02-18 15:00:02.29158147 +0000 UTC m=+1708.812446214" watchObservedRunningTime="2026-02-18 15:00:02.304447522 +0000 UTC m=+1708.825312266" Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.330857 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-687db6759-27j8z"] Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.346675 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-687db6759-27j8z"] Feb 18 15:00:02 crc kubenswrapper[4957]: I0218 15:00:02.431772 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 15:00:03 crc kubenswrapper[4957]: I0218 15:00:03.276910 4957 generic.go:334] "Generic (PLEG): container finished" podID="951cc00a-60f2-4f26-a0a9-8c9313980f92" containerID="fc401c94ce371f721f34a16ad4c8c8adea7e25d149a3ef0ca9b1f0010f8a81ca" exitCode=0 Feb 18 15:00:03 crc kubenswrapper[4957]: I0218 15:00:03.278903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" event={"ID":"951cc00a-60f2-4f26-a0a9-8c9313980f92","Type":"ContainerDied","Data":"fc401c94ce371f721f34a16ad4c8c8adea7e25d149a3ef0ca9b1f0010f8a81ca"} Feb 18 15:00:04 crc kubenswrapper[4957]: I0218 15:00:04.235749 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befc3a14-3df1-45fd-9d4c-91033abb4d61" path="/var/lib/kubelet/pods/befc3a14-3df1-45fd-9d4c-91033abb4d61/volumes" Feb 18 15:00:04 crc kubenswrapper[4957]: I0218 15:00:04.648198 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 18 15:00:04 crc kubenswrapper[4957]: I0218 15:00:04.752148 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.048295 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.128969 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/951cc00a-60f2-4f26-a0a9-8c9313980f92-config-volume\") pod \"951cc00a-60f2-4f26-a0a9-8c9313980f92\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.129171 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/951cc00a-60f2-4f26-a0a9-8c9313980f92-secret-volume\") pod \"951cc00a-60f2-4f26-a0a9-8c9313980f92\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.129230 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnctc\" (UniqueName: \"kubernetes.io/projected/951cc00a-60f2-4f26-a0a9-8c9313980f92-kube-api-access-lnctc\") pod \"951cc00a-60f2-4f26-a0a9-8c9313980f92\" (UID: \"951cc00a-60f2-4f26-a0a9-8c9313980f92\") " Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.129641 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951cc00a-60f2-4f26-a0a9-8c9313980f92-config-volume" (OuterVolumeSpecName: "config-volume") pod "951cc00a-60f2-4f26-a0a9-8c9313980f92" (UID: "951cc00a-60f2-4f26-a0a9-8c9313980f92"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.129991 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/951cc00a-60f2-4f26-a0a9-8c9313980f92-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.134548 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951cc00a-60f2-4f26-a0a9-8c9313980f92-kube-api-access-lnctc" (OuterVolumeSpecName: "kube-api-access-lnctc") pod "951cc00a-60f2-4f26-a0a9-8c9313980f92" (UID: "951cc00a-60f2-4f26-a0a9-8c9313980f92"). InnerVolumeSpecName "kube-api-access-lnctc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.135688 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951cc00a-60f2-4f26-a0a9-8c9313980f92-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "951cc00a-60f2-4f26-a0a9-8c9313980f92" (UID: "951cc00a-60f2-4f26-a0a9-8c9313980f92"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.213316 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:00:07 crc kubenswrapper[4957]: E0218 15:00:07.213854 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.232288 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/951cc00a-60f2-4f26-a0a9-8c9313980f92-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.232328 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnctc\" (UniqueName: \"kubernetes.io/projected/951cc00a-60f2-4f26-a0a9-8c9313980f92-kube-api-access-lnctc\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.353391 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" event={"ID":"951cc00a-60f2-4f26-a0a9-8c9313980f92","Type":"ContainerDied","Data":"4b276415d9a73842291b423174768504df9b494eaa6884f09b06abd62fc97beb"} Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.353441 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b276415d9a73842291b423174768504df9b494eaa6884f09b06abd62fc97beb" Feb 18 15:00:07 crc kubenswrapper[4957]: I0218 15:00:07.353568 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh" Feb 18 15:00:08 crc kubenswrapper[4957]: I0218 15:00:08.380196 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-stkjp" event={"ID":"6fe56e22-16a2-4e2a-a473-54592ab46673","Type":"ContainerStarted","Data":"0178af03f7af51d03a0a43381d9d1786c77f1ef247ff83c2d6c840956c7591e8"} Feb 18 15:00:08 crc kubenswrapper[4957]: I0218 15:00:08.396495 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-stkjp" podStartSLOduration=2.70643201 podStartE2EDuration="8.396475479s" podCreationTimestamp="2026-02-18 15:00:00 +0000 UTC" firstStartedPulling="2026-02-18 15:00:01.888181951 +0000 UTC m=+1708.409046695" lastFinishedPulling="2026-02-18 15:00:07.57822541 +0000 UTC m=+1714.099090164" observedRunningTime="2026-02-18 15:00:08.394649916 +0000 UTC m=+1714.915514670" watchObservedRunningTime="2026-02-18 15:00:08.396475479 +0000 UTC m=+1714.917340223" Feb 18 15:00:09 crc kubenswrapper[4957]: I0218 15:00:09.324949 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerName="rabbitmq" containerID="cri-o://360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6" gracePeriod=604796 Feb 18 15:00:10 crc kubenswrapper[4957]: I0218 15:00:10.448039 4957 generic.go:334] "Generic (PLEG): container finished" podID="6fe56e22-16a2-4e2a-a473-54592ab46673" containerID="0178af03f7af51d03a0a43381d9d1786c77f1ef247ff83c2d6c840956c7591e8" exitCode=0 Feb 18 15:00:10 crc kubenswrapper[4957]: I0218 15:00:10.448154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-stkjp" event={"ID":"6fe56e22-16a2-4e2a-a473-54592ab46673","Type":"ContainerDied","Data":"0178af03f7af51d03a0a43381d9d1786c77f1ef247ff83c2d6c840956c7591e8"} Feb 18 15:00:11 crc kubenswrapper[4957]: I0218 15:00:11.894526 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:11 crc kubenswrapper[4957]: I0218 15:00:11.945565 4957 scope.go:117] "RemoveContainer" containerID="f90cb7227354720dcb784183e790fc281faa8f9ddead2c64c3b1a34b749cc189" Feb 18 15:00:11 crc kubenswrapper[4957]: I0218 15:00:11.983478 4957 scope.go:117] "RemoveContainer" containerID="db5f4b49bc3138d1bb72a3a99d8840b031c7e803d218b53cfbc5deb258777807" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.062886 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-config-data\") pod \"6fe56e22-16a2-4e2a-a473-54592ab46673\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.062948 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-scripts\") pod \"6fe56e22-16a2-4e2a-a473-54592ab46673\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.062977 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-combined-ca-bundle\") pod \"6fe56e22-16a2-4e2a-a473-54592ab46673\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.063161 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nk4l\" (UniqueName: \"kubernetes.io/projected/6fe56e22-16a2-4e2a-a473-54592ab46673-kube-api-access-5nk4l\") pod \"6fe56e22-16a2-4e2a-a473-54592ab46673\" (UID: \"6fe56e22-16a2-4e2a-a473-54592ab46673\") " Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.069661 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe56e22-16a2-4e2a-a473-54592ab46673-kube-api-access-5nk4l" (OuterVolumeSpecName: "kube-api-access-5nk4l") pod "6fe56e22-16a2-4e2a-a473-54592ab46673" (UID: "6fe56e22-16a2-4e2a-a473-54592ab46673"). InnerVolumeSpecName "kube-api-access-5nk4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.071091 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-scripts" (OuterVolumeSpecName: "scripts") pod "6fe56e22-16a2-4e2a-a473-54592ab46673" (UID: "6fe56e22-16a2-4e2a-a473-54592ab46673"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.098622 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-config-data" (OuterVolumeSpecName: "config-data") pod "6fe56e22-16a2-4e2a-a473-54592ab46673" (UID: "6fe56e22-16a2-4e2a-a473-54592ab46673"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.103257 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fe56e22-16a2-4e2a-a473-54592ab46673" (UID: "6fe56e22-16a2-4e2a-a473-54592ab46673"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.166723 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.166754 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.166766 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe56e22-16a2-4e2a-a473-54592ab46673-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.166780 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nk4l\" (UniqueName: \"kubernetes.io/projected/6fe56e22-16a2-4e2a-a473-54592ab46673-kube-api-access-5nk4l\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.475715 4957 generic.go:334] "Generic (PLEG): container finished" podID="2cef43ea-a55c-4f05-8598-54b9bfc950b3" containerID="3f1fcb4321663508d7314f54625be538eefdd05d7ad6f3203921530777a690d3" exitCode=0 Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.475800 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" event={"ID":"2cef43ea-a55c-4f05-8598-54b9bfc950b3","Type":"ContainerDied","Data":"3f1fcb4321663508d7314f54625be538eefdd05d7ad6f3203921530777a690d3"} Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.479991 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-stkjp" event={"ID":"6fe56e22-16a2-4e2a-a473-54592ab46673","Type":"ContainerDied","Data":"5631a22f394a58d3437f1ebdff09f66e3b44a7b11fcedfe73f18120118af9c97"} Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.480032 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5631a22f394a58d3437f1ebdff09f66e3b44a7b11fcedfe73f18120118af9c97" Feb 18 15:00:12 crc kubenswrapper[4957]: I0218 15:00:12.480093 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-stkjp" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.020348 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.116573 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-inventory\") pod \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.116690 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpcdp\" (UniqueName: \"kubernetes.io/projected/2cef43ea-a55c-4f05-8598-54b9bfc950b3-kube-api-access-bpcdp\") pod \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.116867 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-repo-setup-combined-ca-bundle\") pod \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.116951 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-ssh-key-openstack-edpm-ipam\") pod \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\" (UID: \"2cef43ea-a55c-4f05-8598-54b9bfc950b3\") " Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.122787 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2cef43ea-a55c-4f05-8598-54b9bfc950b3" (UID: "2cef43ea-a55c-4f05-8598-54b9bfc950b3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.133796 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cef43ea-a55c-4f05-8598-54b9bfc950b3-kube-api-access-bpcdp" (OuterVolumeSpecName: "kube-api-access-bpcdp") pod "2cef43ea-a55c-4f05-8598-54b9bfc950b3" (UID: "2cef43ea-a55c-4f05-8598-54b9bfc950b3"). InnerVolumeSpecName "kube-api-access-bpcdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.150174 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-inventory" (OuterVolumeSpecName: "inventory") pod "2cef43ea-a55c-4f05-8598-54b9bfc950b3" (UID: "2cef43ea-a55c-4f05-8598-54b9bfc950b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.160327 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cef43ea-a55c-4f05-8598-54b9bfc950b3" (UID: "2cef43ea-a55c-4f05-8598-54b9bfc950b3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.219979 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.220042 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpcdp\" (UniqueName: \"kubernetes.io/projected/2cef43ea-a55c-4f05-8598-54b9bfc950b3-kube-api-access-bpcdp\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.220057 4957 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.220071 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cef43ea-a55c-4f05-8598-54b9bfc950b3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.510846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" event={"ID":"2cef43ea-a55c-4f05-8598-54b9bfc950b3","Type":"ContainerDied","Data":"2e7764d284428e4732671a82a4a5094e1b2155dd2b888690b4df2e36ea462a51"} Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.510895 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e7764d284428e4732671a82a4a5094e1b2155dd2b888690b4df2e36ea462a51" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.511088 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.633759 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv"] Feb 18 15:00:14 crc kubenswrapper[4957]: E0218 15:00:14.634502 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe56e22-16a2-4e2a-a473-54592ab46673" containerName="aodh-db-sync" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.634518 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe56e22-16a2-4e2a-a473-54592ab46673" containerName="aodh-db-sync" Feb 18 15:00:14 crc kubenswrapper[4957]: E0218 15:00:14.634540 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951cc00a-60f2-4f26-a0a9-8c9313980f92" containerName="collect-profiles" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.634549 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="951cc00a-60f2-4f26-a0a9-8c9313980f92" containerName="collect-profiles" Feb 18 15:00:14 crc kubenswrapper[4957]: E0218 15:00:14.634572 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befc3a14-3df1-45fd-9d4c-91033abb4d61" containerName="heat-engine" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.634582 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="befc3a14-3df1-45fd-9d4c-91033abb4d61" containerName="heat-engine" Feb 18 15:00:14 crc kubenswrapper[4957]: E0218 15:00:14.634636 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cef43ea-a55c-4f05-8598-54b9bfc950b3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.634645 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cef43ea-a55c-4f05-8598-54b9bfc950b3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.634939 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="951cc00a-60f2-4f26-a0a9-8c9313980f92" containerName="collect-profiles" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.635042 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cef43ea-a55c-4f05-8598-54b9bfc950b3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.635064 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="befc3a14-3df1-45fd-9d4c-91033abb4d61" containerName="heat-engine" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.635081 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe56e22-16a2-4e2a-a473-54592ab46673" containerName="aodh-db-sync" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.636016 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.638689 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.638858 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.638992 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.639629 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.660592 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv"] Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.739969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.740047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.740183 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk67x\" (UniqueName: \"kubernetes.io/projected/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-kube-api-access-nk67x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.842077 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.842250 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk67x\" (UniqueName: \"kubernetes.io/projected/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-kube-api-access-nk67x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.842457 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.847682 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.849073 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.862536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk67x\" (UniqueName: \"kubernetes.io/projected/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-kube-api-access-nk67x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-szxlv\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:14 crc kubenswrapper[4957]: I0218 15:00:14.961410 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.386973 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.387968 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-listener" containerID="cri-o://576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819" gracePeriod=30 Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.388018 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-notifier" containerID="cri-o://a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27" gracePeriod=30 Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.388090 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-evaluator" containerID="cri-o://3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598" gracePeriod=30 Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.387903 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-api" containerID="cri-o://365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6" gracePeriod=30 Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.561516 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv"] Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.856703 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.997206 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f237ab9c-fc69-491b-98da-97ce92214eb0-pod-info\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.997344 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-config-data\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:15 crc kubenswrapper[4957]: I0218 15:00:15.997477 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-tls\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.998922 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.998988 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-plugins-conf\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.999064 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-server-conf\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.999116 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-erlang-cookie\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.999139 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f237ab9c-fc69-491b-98da-97ce92214eb0-erlang-cookie-secret\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.999188 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-confd\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.999233 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gggwp\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-kube-api-access-gggwp\") pod 
\"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:15.999277 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-plugins\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.009631 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.011229 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.013004 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.021695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.022659 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f237ab9c-fc69-491b-98da-97ce92214eb0-pod-info" (OuterVolumeSpecName: "pod-info") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.025191 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-kube-api-access-gggwp" (OuterVolumeSpecName: "kube-api-access-gggwp") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "kube-api-access-gggwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.065777 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f237ab9c-fc69-491b-98da-97ce92214eb0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.105409 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.121521 4957 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.121543 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.121557 4957 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f237ab9c-fc69-491b-98da-97ce92214eb0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.121567 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gggwp\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-kube-api-access-gggwp\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.121578 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.121586 4957 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f237ab9c-fc69-491b-98da-97ce92214eb0-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.131157 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-config-data" (OuterVolumeSpecName: "config-data") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.203035 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-server-conf" (OuterVolumeSpecName: "server-conf") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.230350 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6" (OuterVolumeSpecName: "persistence") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "pvc-dac279e8-a928-4788-ad49-611c729ef5f6". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: E0218 15:00:16.251709 4957 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes/kubernetes.io~csi/pvc-dac279e8-a928-4788-ad49-611c729ef5f6/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes/kubernetes.io~csi/pvc-dac279e8-a928-4788-ad49-611c729ef5f6/vol_data.json]: open /var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes/kubernetes.io~csi/pvc-dac279e8-a928-4788-ad49-611c729ef5f6/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"f237ab9c-fc69-491b-98da-97ce92214eb0\" (UID: \"f237ab9c-fc69-491b-98da-97ce92214eb0\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes/kubernetes.io~csi/pvc-dac279e8-a928-4788-ad49-611c729ef5f6/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes/kubernetes.io~csi/pvc-dac279e8-a928-4788-ad49-611c729ef5f6/vol_data.json]: open /var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes/kubernetes.io~csi/pvc-dac279e8-a928-4788-ad49-611c729ef5f6/vol_data.json: no such file or directory" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.253220 4957 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.253250 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f237ab9c-fc69-491b-98da-97ce92214eb0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.253285 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") on node \"crc\" " Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.342778 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.343356 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dac279e8-a928-4788-ad49-611c729ef5f6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6") on node "crc" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.357303 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f237ab9c-fc69-491b-98da-97ce92214eb0" (UID: "f237ab9c-fc69-491b-98da-97ce92214eb0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.361921 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.361963 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f237ab9c-fc69-491b-98da-97ce92214eb0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.559862 4957 generic.go:334] "Generic (PLEG): container finished" podID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerID="360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6" exitCode=0 Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.559926 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.559975 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f237ab9c-fc69-491b-98da-97ce92214eb0","Type":"ContainerDied","Data":"360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6"} Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.560008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"f237ab9c-fc69-491b-98da-97ce92214eb0","Type":"ContainerDied","Data":"13dcdb4960568a4dd191f12e79245344602d80cb9347623ce8312359b5d97738"} Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.560028 4957 scope.go:117] "RemoveContainer" containerID="360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.570385 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" event={"ID":"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce","Type":"ContainerStarted","Data":"110651b06c24aec7968cad70a9786f0a24eb9f49b297c0310f18952e9e7e77c6"} Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.570445 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" event={"ID":"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce","Type":"ContainerStarted","Data":"d88f66e363c18b410de1aa5ec3d228717c48d80321610797c5f5e6ddd857bea9"} Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.574973 4957 generic.go:334] "Generic (PLEG): container finished" podID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerID="3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598" exitCode=0 Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.575002 4957 generic.go:334] "Generic 
(PLEG): container finished" podID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerID="365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6" exitCode=0 Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.575026 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerDied","Data":"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598"} Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.575050 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerDied","Data":"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6"} Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.590337 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" podStartSLOduration=2.101033384 podStartE2EDuration="2.590319002s" podCreationTimestamp="2026-02-18 15:00:14 +0000 UTC" firstStartedPulling="2026-02-18 15:00:15.56784913 +0000 UTC m=+1722.088713874" lastFinishedPulling="2026-02-18 15:00:16.057134748 +0000 UTC m=+1722.577999492" observedRunningTime="2026-02-18 15:00:16.584191985 +0000 UTC m=+1723.105056739" watchObservedRunningTime="2026-02-18 15:00:16.590319002 +0000 UTC m=+1723.111183746" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.607716 4957 scope.go:117] "RemoveContainer" containerID="d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.642804 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.650705 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.664604 4957 scope.go:117] "RemoveContainer" containerID="360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.666618 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 15:00:16 crc kubenswrapper[4957]: E0218 15:00:16.667379 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerName="setup-container" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.667407 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerName="setup-container" Feb 18 15:00:16 crc kubenswrapper[4957]: E0218 15:00:16.667474 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerName="rabbitmq" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.667484 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerName="rabbitmq" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.667863 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" containerName="rabbitmq" Feb 18 15:00:16 crc kubenswrapper[4957]: E0218 15:00:16.669590 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6\": container with ID starting with 360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6 not found: ID does 
not exist" containerID="360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.669658 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.669649 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6"} err="failed to get container status \"360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6\": rpc error: code = NotFound desc = could not find container \"360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6\": container with ID starting with 360a95ea3a9eed582ab8623dd5c867b4c8d186486451fb4fa22e41369f135fb6 not found: ID does not exist" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.670343 4957 scope.go:117] "RemoveContainer" containerID="d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007" Feb 18 15:00:16 crc kubenswrapper[4957]: E0218 15:00:16.671064 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007\": container with ID starting with d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007 not found: ID does not exist" containerID="d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.671298 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007"} err="failed to get container status \"d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007\": rpc error: code = NotFound desc = could not find container \"d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007\": container with ID starting with d656f6ea94db1d3987b5635bb9c637ada193464854e5a9f7930c8f18d02e6007 not found: ID does not exist" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.681621 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.781506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-config-data\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782198 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782367 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782487 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnr78\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-kube-api-access-jnr78\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782591 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782669 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782792 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782887 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.782968 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.783083 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.885683 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.885799 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.885913 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.885952 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-config-data\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886002 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886035 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnr78\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-kube-api-access-jnr78\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886108 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886141 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886184 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.886217 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 
15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.887322 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.887937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-config-data\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.888148 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.889031 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.889143 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.891766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.896525 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.896929 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.896998 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99592bf2358da8fc86c08666f7bb1935d4f3939ad446ee641006a3e81849f401/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.896933 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.897376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:16 crc kubenswrapper[4957]: I0218 15:00:16.916527 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnr78\" (UniqueName: \"kubernetes.io/projected/e6f1982d-1c44-43d2-8a39-6e247a6f09c8-kube-api-access-jnr78\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:17 crc kubenswrapper[4957]: I0218 15:00:17.015347 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dac279e8-a928-4788-ad49-611c729ef5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dac279e8-a928-4788-ad49-611c729ef5f6\") pod \"rabbitmq-server-1\" (UID: \"e6f1982d-1c44-43d2-8a39-6e247a6f09c8\") " pod="openstack/rabbitmq-server-1" Feb 18 15:00:17 crc kubenswrapper[4957]: I0218 15:00:17.313358 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 15:00:17 crc kubenswrapper[4957]: I0218 15:00:17.814397 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 15:00:18 crc kubenswrapper[4957]: I0218 15:00:18.228925 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f237ab9c-fc69-491b-98da-97ce92214eb0" path="/var/lib/kubelet/pods/f237ab9c-fc69-491b-98da-97ce92214eb0/volumes" Feb 18 15:00:18 crc kubenswrapper[4957]: I0218 15:00:18.634451 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e6f1982d-1c44-43d2-8a39-6e247a6f09c8","Type":"ContainerStarted","Data":"6dbffba2418c99d52b5cc681c1d61d795f4fa824a28cf04debb3cb23928b3a8c"} Feb 18 15:00:19 crc kubenswrapper[4957]: I0218 15:00:19.650109 4957 generic.go:334] "Generic (PLEG): container finished" podID="61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" containerID="110651b06c24aec7968cad70a9786f0a24eb9f49b297c0310f18952e9e7e77c6" exitCode=0 Feb 18 15:00:19 crc kubenswrapper[4957]: I0218 15:00:19.650171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" event={"ID":"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce","Type":"ContainerDied","Data":"110651b06c24aec7968cad70a9786f0a24eb9f49b297c0310f18952e9e7e77c6"} Feb 18 15:00:20 crc kubenswrapper[4957]: I0218 15:00:20.666340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e6f1982d-1c44-43d2-8a39-6e247a6f09c8","Type":"ContainerStarted","Data":"402be53bd884522454b7bfd60ab2ca17b02ff486c354c9e83621553281eedef8"} Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.213819 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:00:21 crc kubenswrapper[4957]: E0218 15:00:21.214499 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.241463 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.317707 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-inventory\") pod \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.318177 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk67x\" (UniqueName: \"kubernetes.io/projected/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-kube-api-access-nk67x\") pod \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.318271 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-ssh-key-openstack-edpm-ipam\") pod \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\" (UID: \"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce\") " Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.326950 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-kube-api-access-nk67x" (OuterVolumeSpecName: "kube-api-access-nk67x") pod "61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" (UID: "61f57f78-cac7-4ae9-b5bd-eef2885ac7ce"). InnerVolumeSpecName "kube-api-access-nk67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.358694 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" (UID: "61f57f78-cac7-4ae9-b5bd-eef2885ac7ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.360188 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-inventory" (OuterVolumeSpecName: "inventory") pod "61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" (UID: "61f57f78-cac7-4ae9-b5bd-eef2885ac7ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.422441 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk67x\" (UniqueName: \"kubernetes.io/projected/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-kube-api-access-nk67x\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.422481 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.422495 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f57f78-cac7-4ae9-b5bd-eef2885ac7ce-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.688483 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" event={"ID":"61f57f78-cac7-4ae9-b5bd-eef2885ac7ce","Type":"ContainerDied","Data":"d88f66e363c18b410de1aa5ec3d228717c48d80321610797c5f5e6ddd857bea9"} Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.688515 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-szxlv" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.688541 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88f66e363c18b410de1aa5ec3d228717c48d80321610797c5f5e6ddd857bea9" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.758510 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd"] Feb 18 15:00:21 crc kubenswrapper[4957]: E0218 15:00:21.759302 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.759334 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.759787 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f57f78-cac7-4ae9-b5bd-eef2885ac7ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.761164 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.764082 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.764546 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.766493 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.766787 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.771261 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd"] Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.834685 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.834846 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.834876 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.835091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdb5\" (UniqueName: \"kubernetes.io/projected/ce22a60c-ac91-4fcd-a298-330ace1c4d68-kube-api-access-2wdb5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.937852 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.938545 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.938734 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.939409 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdb5\" (UniqueName: \"kubernetes.io/projected/ce22a60c-ac91-4fcd-a298-330ace1c4d68-kube-api-access-2wdb5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.943699 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.944569 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.945675 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:21 crc kubenswrapper[4957]: I0218 15:00:21.957784 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdb5\" (UniqueName: \"kubernetes.io/projected/ce22a60c-ac91-4fcd-a298-330ace1c4d68-kube-api-access-2wdb5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-45npd\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:22 crc kubenswrapper[4957]: I0218 15:00:22.082108 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:00:22 crc kubenswrapper[4957]: I0218 15:00:22.706080 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd"] Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.368121 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.407896 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz2jh\" (UniqueName: \"kubernetes.io/projected/2f90e4b1-a149-424f-9ad1-228b7bf592ab-kube-api-access-fz2jh\") pod \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.408223 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-internal-tls-certs\") pod \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.408359 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-scripts\") pod \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.408623 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-combined-ca-bundle\") pod \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.408760 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-public-tls-certs\") pod \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.408932 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-config-data\") pod \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\" (UID: \"2f90e4b1-a149-424f-9ad1-228b7bf592ab\") " Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.413656 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f90e4b1-a149-424f-9ad1-228b7bf592ab-kube-api-access-fz2jh" (OuterVolumeSpecName: "kube-api-access-fz2jh") pod "2f90e4b1-a149-424f-9ad1-228b7bf592ab" (UID: "2f90e4b1-a149-424f-9ad1-228b7bf592ab"). InnerVolumeSpecName "kube-api-access-fz2jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.421199 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-scripts" (OuterVolumeSpecName: "scripts") pod "2f90e4b1-a149-424f-9ad1-228b7bf592ab" (UID: "2f90e4b1-a149-424f-9ad1-228b7bf592ab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.512092 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz2jh\" (UniqueName: \"kubernetes.io/projected/2f90e4b1-a149-424f-9ad1-228b7bf592ab-kube-api-access-fz2jh\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.512123 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.557031 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2f90e4b1-a149-424f-9ad1-228b7bf592ab" (UID: "2f90e4b1-a149-424f-9ad1-228b7bf592ab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.561619 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2f90e4b1-a149-424f-9ad1-228b7bf592ab" (UID: "2f90e4b1-a149-424f-9ad1-228b7bf592ab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.617810 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.617868 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.618230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f90e4b1-a149-424f-9ad1-228b7bf592ab" (UID: "2f90e4b1-a149-424f-9ad1-228b7bf592ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.647734 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-config-data" (OuterVolumeSpecName: "config-data") pod "2f90e4b1-a149-424f-9ad1-228b7bf592ab" (UID: "2f90e4b1-a149-424f-9ad1-228b7bf592ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.714801 4957 generic.go:334] "Generic (PLEG): container finished" podID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerID="576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819" exitCode=0 Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.715021 4957 generic.go:334] "Generic (PLEG): container finished" podID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerID="a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27" exitCode=0 Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.715062 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerDied","Data":"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819"} Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.715090 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerDied","Data":"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27"} Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.715099 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f90e4b1-a149-424f-9ad1-228b7bf592ab","Type":"ContainerDied","Data":"9f9cae1968763df0f7b8acab6e412fc94a1619f33ff92438f71d794de823e133"} Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.715113 4957 scope.go:117] "RemoveContainer" containerID="576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.715267 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.718747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" event={"ID":"ce22a60c-ac91-4fcd-a298-330ace1c4d68","Type":"ContainerStarted","Data":"65d75e09233d5a1321649dbe39740c7982a35b4c214b7165dc136659da1d8ad3"} Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.718802 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" event={"ID":"ce22a60c-ac91-4fcd-a298-330ace1c4d68","Type":"ContainerStarted","Data":"cd27c316c463c55fed6d092d67fd0b59ebf74df94e1b406d0671769dca00f722"} Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.721913 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.721954 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f90e4b1-a149-424f-9ad1-228b7bf592ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.745255 4957 scope.go:117] "RemoveContainer" containerID="a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.749921 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" podStartSLOduration=2.3297391689999998 podStartE2EDuration="2.749900102s" podCreationTimestamp="2026-02-18 15:00:21 +0000 UTC" firstStartedPulling="2026-02-18 15:00:22.717920986 
+0000 UTC m=+1729.238785730" lastFinishedPulling="2026-02-18 15:00:23.138081919 +0000 UTC m=+1729.658946663" observedRunningTime="2026-02-18 15:00:23.746958707 +0000 UTC m=+1730.267823461" watchObservedRunningTime="2026-02-18 15:00:23.749900102 +0000 UTC m=+1730.270764856" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.776269 4957 scope.go:117] "RemoveContainer" containerID="3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.799892 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.813217 4957 scope.go:117] "RemoveContainer" containerID="365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.815142 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.845251 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.845766 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-api" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.845779 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-api" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.845797 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-listener" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.845803 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-listener" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.845813 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-notifier" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.845820 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-notifier" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.845846 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-evaluator" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.845852 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-evaluator" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.846066 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-notifier" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.846081 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-api" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.846096 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-evaluator" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.846107 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" containerName="aodh-listener" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.858711 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.859754 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.862384 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.862829 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.862904 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-62fdg" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.862979 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.863007 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.871758 4957 scope.go:117] "RemoveContainer" containerID="576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.872368 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819\": container with ID starting with 576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819 not found: ID does not exist" containerID="576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.872405 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819"} err="failed to get container status \"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819\": rpc error: code = NotFound desc = could not find container \"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819\": container with ID starting with 576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.872449 4957 scope.go:117] "RemoveContainer" containerID="a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.872819 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27\": container with ID starting with a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27 not found: ID does not exist" containerID="a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.872865 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27"} err="failed to get container status \"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27\": rpc error: code = NotFound desc = could not find container \"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27\": container with ID starting with a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 
15:00:23.872897 4957 scope.go:117] "RemoveContainer" containerID="3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.873161 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598\": container with ID starting with 3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598 not found: ID does not exist" containerID="3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.873185 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598"} err="failed to get container status \"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598\": rpc error: code = NotFound desc = could not find container \"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598\": container with ID starting with 3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.873200 4957 scope.go:117] "RemoveContainer" containerID="365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6" Feb 18 15:00:23 crc kubenswrapper[4957]: E0218 15:00:23.873390 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6\": container with ID starting with 365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6 not found: ID does not exist" containerID="365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.873442 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6"} err="failed to get container status \"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6\": rpc error: code = NotFound desc = could not find container \"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6\": container with ID starting with 365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.873463 4957 scope.go:117] "RemoveContainer" containerID="576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.873665 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819"} err="failed to get container status \"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819\": rpc error: code = NotFound desc = could not find container \"576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819\": container with ID starting with 576b3b412af1ddff86227694ed5276800b493b1b45e7e0d8792c21e68bae1819 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.873691 4957 scope.go:117] "RemoveContainer" containerID="a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.874698 4957 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27"} err="failed to get container status \"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27\": rpc error: code = NotFound desc = could not find container \"a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27\": container with ID starting with a371aac4c296750ac28aaaee5cc6a975bb060cba4116d156601d170ba38bdb27 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.874722 4957 scope.go:117] "RemoveContainer" containerID="3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.874971 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598"} err="failed to get container status \"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598\": rpc error: code = NotFound desc = could not find container \"3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598\": container with ID starting with 3bc17b390ab8d711811837c84ba7e1961670c272507fd4dd778136c6397de598 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.874994 4957 scope.go:117] "RemoveContainer" containerID="365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.875358 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6"} err="failed to get container status \"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6\": rpc error: code = NotFound desc = could not find container \"365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6\": container with ID starting with 365b3ecae2b7460331099c7af8bd676e85ecffef5c5e269f97a08d633e2544e6 not found: ID does not exist" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.929103 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-public-tls-certs\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.929213 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jxlr\" (UniqueName: \"kubernetes.io/projected/bdf7772f-356b-41a5-ad1d-80b40c742d36-kube-api-access-9jxlr\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.929236 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-internal-tls-certs\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.929388 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-config-data\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.929441 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:23 crc kubenswrapper[4957]: I0218 15:00:23.929495 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-scripts\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.031401 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-scripts\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.031556 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-public-tls-certs\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.031615 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-internal-tls-certs\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.031630 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jxlr\" (UniqueName: \"kubernetes.io/projected/bdf7772f-356b-41a5-ad1d-80b40c742d36-kube-api-access-9jxlr\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.031730 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.031748 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-config-data\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.036212 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-scripts\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.036345 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-public-tls-certs\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.037109 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-internal-tls-certs\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.037485 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.042959 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf7772f-356b-41a5-ad1d-80b40c742d36-config-data\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.052050 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jxlr\" (UniqueName: \"kubernetes.io/projected/bdf7772f-356b-41a5-ad1d-80b40c742d36-kube-api-access-9jxlr\") pod \"aodh-0\" (UID: \"bdf7772f-356b-41a5-ad1d-80b40c742d36\") " pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.195047 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.231103 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f90e4b1-a149-424f-9ad1-228b7bf592ab" path="/var/lib/kubelet/pods/2f90e4b1-a149-424f-9ad1-228b7bf592ab/volumes" Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.706413 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 15:00:24 crc kubenswrapper[4957]: I0218 15:00:24.736189 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bdf7772f-356b-41a5-ad1d-80b40c742d36","Type":"ContainerStarted","Data":"4e6fcdf7a1935b189dd8ac56837affadd3703b6d583a1d8df3de49df6325eaa2"} Feb 18 15:00:25 crc kubenswrapper[4957]: I0218 15:00:25.759534 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bdf7772f-356b-41a5-ad1d-80b40c742d36","Type":"ContainerStarted","Data":"0d0cdfdba4fc779c1bea0a900541792dac36b77ac51fb7a09df8230dd88a8f43"} Feb 18 15:00:26 crc kubenswrapper[4957]: I0218 15:00:26.773249 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bdf7772f-356b-41a5-ad1d-80b40c742d36","Type":"ContainerStarted","Data":"3b5a8d75cffb0f881866c6c96a6afa33aa6142dee427c49eaa816511731e07f8"} Feb 18 15:00:27 crc kubenswrapper[4957]: I0218 15:00:27.794819 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bdf7772f-356b-41a5-ad1d-80b40c742d36","Type":"ContainerStarted","Data":"3ff9a078679b3b40573e3e3a6a15da8f374ec921273d25f7a9ef1216d7e23012"} Feb 18 15:00:28 crc kubenswrapper[4957]: I0218 15:00:28.807409 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bdf7772f-356b-41a5-ad1d-80b40c742d36","Type":"ContainerStarted","Data":"e7cd0951a1785d7589c59ed1faa2bc2952c4a2f879828188a501aec95f3c8253"} Feb 18 15:00:28 crc kubenswrapper[4957]: I0218 15:00:28.826965 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.159774563 podStartE2EDuration="5.826946674s" podCreationTimestamp="2026-02-18 15:00:23 +0000 UTC" firstStartedPulling="2026-02-18 15:00:24.706555826 +0000 UTC 
m=+1731.227420610" lastFinishedPulling="2026-02-18 15:00:28.373727977 +0000 UTC m=+1734.894592721" observedRunningTime="2026-02-18 15:00:28.825821562 +0000 UTC m=+1735.346686306" watchObservedRunningTime="2026-02-18 15:00:28.826946674 +0000 UTC m=+1735.347811428" Feb 18 15:00:35 crc kubenswrapper[4957]: I0218 15:00:35.213259 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:00:35 crc kubenswrapper[4957]: E0218 15:00:35.214334 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:00:47 crc kubenswrapper[4957]: I0218 15:00:47.213853 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:00:47 crc kubenswrapper[4957]: E0218 15:00:47.214989 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:00:52 crc kubenswrapper[4957]: I0218 15:00:52.167095 4957 generic.go:334] "Generic (PLEG): container finished" podID="e6f1982d-1c44-43d2-8a39-6e247a6f09c8" containerID="402be53bd884522454b7bfd60ab2ca17b02ff486c354c9e83621553281eedef8" exitCode=0 Feb 18 15:00:52 crc kubenswrapper[4957]: I0218 15:00:52.167366 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e6f1982d-1c44-43d2-8a39-6e247a6f09c8","Type":"ContainerDied","Data":"402be53bd884522454b7bfd60ab2ca17b02ff486c354c9e83621553281eedef8"} Feb 18 15:00:53 crc kubenswrapper[4957]: I0218 15:00:53.181244 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e6f1982d-1c44-43d2-8a39-6e247a6f09c8","Type":"ContainerStarted","Data":"d93c1e00e87314a08e1123cf3829feeb134f0ead51cc9bbed22f8cc04fbf033e"} Feb 18 15:00:53 crc kubenswrapper[4957]: I0218 15:00:53.183388 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 18 15:00:53 crc kubenswrapper[4957]: I0218 15:00:53.216264 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.216246995 podStartE2EDuration="37.216246995s" podCreationTimestamp="2026-02-18 15:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:00:53.210018915 +0000 UTC m=+1759.730883679" watchObservedRunningTime="2026-02-18 15:00:53.216246995 +0000 UTC m=+1759.737111739" Feb 18 15:00:59 crc kubenswrapper[4957]: I0218 15:00:59.213470 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:00:59 crc kubenswrapper[4957]: E0218 15:00:59.214480 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.165980 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523781-pkckt"] Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.167930 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.225171 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523781-pkckt"] Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.309882 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbx4n\" (UniqueName: \"kubernetes.io/projected/c5829257-e7f8-4a49-8e8d-1780b76c346a-kube-api-access-sbx4n\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.309945 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-combined-ca-bundle\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.310034 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-fernet-keys\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.310580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-config-data\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.413530 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-fernet-keys\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.413725 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-config-data\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.413875 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx4n\" (UniqueName: \"kubernetes.io/projected/c5829257-e7f8-4a49-8e8d-1780b76c346a-kube-api-access-sbx4n\") pod \"keystone-cron-29523781-pkckt\" (UID: 
\"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.413915 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-combined-ca-bundle\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.419844 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-combined-ca-bundle\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.421024 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-config-data\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.424602 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-fernet-keys\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.434533 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx4n\" (UniqueName: \"kubernetes.io/projected/c5829257-e7f8-4a49-8e8d-1780b76c346a-kube-api-access-sbx4n\") pod \"keystone-cron-29523781-pkckt\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.489661 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:00 crc kubenswrapper[4957]: I0218 15:01:00.994044 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523781-pkckt"] Feb 18 15:01:01 crc kubenswrapper[4957]: I0218 15:01:01.285737 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-pkckt" event={"ID":"c5829257-e7f8-4a49-8e8d-1780b76c346a","Type":"ContainerStarted","Data":"f3d45d430486e2c7bdcb40e6cd3e4877b83b4895fdd314261d467bc3b970023e"} Feb 18 15:01:01 crc kubenswrapper[4957]: I0218 15:01:01.286169 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-pkckt" event={"ID":"c5829257-e7f8-4a49-8e8d-1780b76c346a","Type":"ContainerStarted","Data":"f2b5f44aa93a982d1624c05b9cbc850616950d74135b30c61d3724da67d22c76"} Feb 18 15:01:02 crc kubenswrapper[4957]: I0218 15:01:02.321063 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523781-pkckt" podStartSLOduration=2.3210396810000002 podStartE2EDuration="2.321039681s" podCreationTimestamp="2026-02-18 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:01:02.313082621 +0000 UTC m=+1768.833947365" watchObservedRunningTime="2026-02-18 15:01:02.321039681 +0000 UTC m=+1768.841904425" Feb 18 15:01:05 crc kubenswrapper[4957]: I0218 15:01:05.327967 4957 generic.go:334] "Generic (PLEG): container finished" podID="c5829257-e7f8-4a49-8e8d-1780b76c346a" containerID="f3d45d430486e2c7bdcb40e6cd3e4877b83b4895fdd314261d467bc3b970023e" exitCode=0 Feb 18 15:01:05 crc kubenswrapper[4957]: I0218 15:01:05.328550 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-pkckt" event={"ID":"c5829257-e7f8-4a49-8e8d-1780b76c346a","Type":"ContainerDied","Data":"f3d45d430486e2c7bdcb40e6cd3e4877b83b4895fdd314261d467bc3b970023e"} Feb 18 15:01:06 crc kubenswrapper[4957]: I0218 15:01:06.855849 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.014460 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-fernet-keys\") pod \"c5829257-e7f8-4a49-8e8d-1780b76c346a\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.014665 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-config-data\") pod \"c5829257-e7f8-4a49-8e8d-1780b76c346a\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.014694 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-combined-ca-bundle\") pod \"c5829257-e7f8-4a49-8e8d-1780b76c346a\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.014749 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbx4n\" (UniqueName: \"kubernetes.io/projected/c5829257-e7f8-4a49-8e8d-1780b76c346a-kube-api-access-sbx4n\") pod \"c5829257-e7f8-4a49-8e8d-1780b76c346a\" (UID: \"c5829257-e7f8-4a49-8e8d-1780b76c346a\") " Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.021791 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5829257-e7f8-4a49-8e8d-1780b76c346a-kube-api-access-sbx4n" (OuterVolumeSpecName: "kube-api-access-sbx4n") pod "c5829257-e7f8-4a49-8e8d-1780b76c346a" (UID: "c5829257-e7f8-4a49-8e8d-1780b76c346a"). InnerVolumeSpecName "kube-api-access-sbx4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.021787 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5829257-e7f8-4a49-8e8d-1780b76c346a" (UID: "c5829257-e7f8-4a49-8e8d-1780b76c346a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.046430 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5829257-e7f8-4a49-8e8d-1780b76c346a" (UID: "c5829257-e7f8-4a49-8e8d-1780b76c346a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.077123 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-config-data" (OuterVolumeSpecName: "config-data") pod "c5829257-e7f8-4a49-8e8d-1780b76c346a" (UID: "c5829257-e7f8-4a49-8e8d-1780b76c346a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.117884 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.117918 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.117927 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5829257-e7f8-4a49-8e8d-1780b76c346a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.117938 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbx4n\" (UniqueName: \"kubernetes.io/projected/c5829257-e7f8-4a49-8e8d-1780b76c346a-kube-api-access-sbx4n\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.315631 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.385871 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-pkckt" event={"ID":"c5829257-e7f8-4a49-8e8d-1780b76c346a","Type":"ContainerDied","Data":"f2b5f44aa93a982d1624c05b9cbc850616950d74135b30c61d3724da67d22c76"} Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.385909 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b5f44aa93a982d1624c05b9cbc850616950d74135b30c61d3724da67d22c76" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.385967 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523781-pkckt" Feb 18 15:01:07 crc kubenswrapper[4957]: I0218 15:01:07.420108 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.182042 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" containerID="cri-o://6a1aba853a6b85f2dc58df840dede15df72b5d14a98d48ba6dec3e389c1d1050" gracePeriod=604796 Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.206512 4957 scope.go:117] "RemoveContainer" containerID="77670be4dca07a5341a66ef224ba145df08a99a0a4195e0749d8b4fc0773cc5d" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.275788 4957 scope.go:117] "RemoveContainer" containerID="ec910696aeece37da26fa7ea20c29f50c0b0104dcac0861ada821dfc42313f6f" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.339210 4957 scope.go:117] "RemoveContainer" containerID="7af7e42045fe579837329e1cb64f1e6505f8fdfdeeb82dc284b3e513d17e3132" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.385557 4957 scope.go:117] "RemoveContainer" containerID="1e2f8288271c4a9f01a34ca593d2f6701401f7c597ca71e28d78a04c62be5f52" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.417525 4957 scope.go:117] "RemoveContainer" containerID="529c06b3faf50c75256e4096a91e6712527ce6db69b811695032087f8aab43f8" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.450977 4957 scope.go:117] "RemoveContainer" containerID="3b7016aca82e4bbdb5f8f9706dd6604e9b40eac55b3e0c37a1b53f441c2b51c9" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.533958 4957 scope.go:117] "RemoveContainer" containerID="a23e1035e2130f7ffc0042e2de1aeb8444de6b117f9304a7d6f61a57faddbd56" Feb 18 15:01:12 crc kubenswrapper[4957]: I0218 15:01:12.567003 4957 scope.go:117] "RemoveContainer" containerID="3d81b08dc34086f30147f8acf3bba813d2ee5d1ebdb6ff06cd95fdc0a01a1280" Feb 18 15:01:13 crc kubenswrapper[4957]: I0218 15:01:13.214714 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:01:13 crc kubenswrapper[4957]: E0218 15:01:13.215833 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:01:18 crc kubenswrapper[4957]: I0218 15:01:18.542573 4957 generic.go:334] "Generic (PLEG): container finished" podID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerID="6a1aba853a6b85f2dc58df840dede15df72b5d14a98d48ba6dec3e389c1d1050" exitCode=0 Feb 18 15:01:18 crc kubenswrapper[4957]: I0218 15:01:18.542640 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0e8ec2b-400b-4454-acdd-517a1727e9f8","Type":"ContainerDied","Data":"6a1aba853a6b85f2dc58df840dede15df72b5d14a98d48ba6dec3e389c1d1050"} Feb 18 15:01:18 crc kubenswrapper[4957]: I0218 15:01:18.971578 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.037083 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-server-conf\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.037393 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-config-data\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.037723 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-tls\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.038655 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.038758 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-confd\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.038814 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkbjz\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-kube-api-access-xkbjz\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.038883 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0e8ec2b-400b-4454-acdd-517a1727e9f8-erlang-cookie-secret\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.038964 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0e8ec2b-400b-4454-acdd-517a1727e9f8-pod-info\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.039065 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-plugins\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.039101 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-erlang-cookie\") pod 
\"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.039187 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-plugins-conf\") pod \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\" (UID: \"a0e8ec2b-400b-4454-acdd-517a1727e9f8\") " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.040348 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.040883 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.041246 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.041896 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.041925 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.041941 4957 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.061586 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e8ec2b-400b-4454-acdd-517a1727e9f8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.062182 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a0e8ec2b-400b-4454-acdd-517a1727e9f8-pod-info" (OuterVolumeSpecName: "pod-info") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.062728 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-kube-api-access-xkbjz" (OuterVolumeSpecName: "kube-api-access-xkbjz") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "kube-api-access-xkbjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.064190 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.074594 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-config-data" (OuterVolumeSpecName: "config-data") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.111458 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-server-conf" (OuterVolumeSpecName: "server-conf") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.127881 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d" (OuterVolumeSpecName: "persistence") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144206 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkbjz\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-kube-api-access-xkbjz\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144244 4957 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0e8ec2b-400b-4454-acdd-517a1727e9f8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144255 4957 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0e8ec2b-400b-4454-acdd-517a1727e9f8-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144266 4957 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144275 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e8ec2b-400b-4454-acdd-517a1727e9f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144288 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.144322 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") on node \"crc\" " Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.186994 4957 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.187203 4957 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d") on node "crc" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.243702 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a0e8ec2b-400b-4454-acdd-517a1727e9f8" (UID: "a0e8ec2b-400b-4454-acdd-517a1727e9f8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.246209 4957 reconciler_common.go:293] "Volume detached for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.246240 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0e8ec2b-400b-4454-acdd-517a1727e9f8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.557471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0e8ec2b-400b-4454-acdd-517a1727e9f8","Type":"ContainerDied","Data":"d67250fb9db8781552259b7dbac5385acbed98df4080642ade0a4a394cd97556"} Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.557820 4957 scope.go:117] "RemoveContainer" containerID="6a1aba853a6b85f2dc58df840dede15df72b5d14a98d48ba6dec3e389c1d1050" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.557568 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.584834 4957 scope.go:117] "RemoveContainer" containerID="5ae784f50deb6c10121274079b9c6592c06242581b699c710aac5ddd82f24d65" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.603794 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.640569 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.656456 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 15:01:19 crc kubenswrapper[4957]: E0218 15:01:19.657018 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="setup-container" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.657045 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="setup-container" Feb 18 15:01:19 crc kubenswrapper[4957]: E0218 15:01:19.657070 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5829257-e7f8-4a49-8e8d-1780b76c346a" containerName="keystone-cron" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.657079 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5829257-e7f8-4a49-8e8d-1780b76c346a" containerName="keystone-cron" Feb 18 15:01:19 crc kubenswrapper[4957]: E0218 15:01:19.657139 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.657149 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.657438 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.657486 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5829257-e7f8-4a49-8e8d-1780b76c346a" containerName="keystone-cron" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 
15:01:19.659020 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.670430 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.759723 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.759923 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c71c78f-243c-40f3-aa9b-1cae17afc260-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760190 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760293 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760354 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswxh\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-kube-api-access-dswxh\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760491 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760594 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760654 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c71c78f-243c-40f3-aa9b-1cae17afc260-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760738 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.760765 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863471 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863539 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dswxh\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-kube-api-access-dswxh\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863576 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863612 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863647 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c71c78f-243c-40f3-aa9b-1cae17afc260-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863691 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863714 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-server-conf\") pod \"rabbitmq-server-0\" 
(UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863874 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c71c78f-243c-40f3-aa9b-1cae17afc260-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863921 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.863970 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.864597 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.866915 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.867239 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.867358 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c71c78f-243c-40f3-aa9b-1cae17afc260-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.867656 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.870740 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.870786 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bc18ac0c40d0aacb89d93d2c4c0188f67a28fb5cdb73a182e239f99271fd422f/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.873256 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.879899 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c71c78f-243c-40f3-aa9b-1cae17afc260-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.881039 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.882600 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dswxh\" (UniqueName: \"kubernetes.io/projected/7c71c78f-243c-40f3-aa9b-1cae17afc260-kube-api-access-dswxh\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.887749 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c71c78f-243c-40f3-aa9b-1cae17afc260-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.963825 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9925550-aa8b-4d58-92f5-133e76b1ee6d\") pod \"rabbitmq-server-0\" (UID: \"7c71c78f-243c-40f3-aa9b-1cae17afc260\") " pod="openstack/rabbitmq-server-0" Feb 18 15:01:19 crc kubenswrapper[4957]: I0218 15:01:19.995494 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 15:01:20 crc kubenswrapper[4957]: I0218 15:01:20.247182 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" path="/var/lib/kubelet/pods/a0e8ec2b-400b-4454-acdd-517a1727e9f8/volumes" Feb 18 15:01:20 crc kubenswrapper[4957]: I0218 15:01:20.495763 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 15:01:20 crc kubenswrapper[4957]: I0218 15:01:20.586980 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c71c78f-243c-40f3-aa9b-1cae17afc260","Type":"ContainerStarted","Data":"6c954facde5d79ebf99856be8cb6d35b7f697206de08dc598fa317887fb43791"} Feb 18 15:01:23 crc kubenswrapper[4957]: I0218 15:01:23.610941 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a0e8ec2b-400b-4454-acdd-517a1727e9f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout" Feb 18 15:01:23 crc kubenswrapper[4957]: I0218 15:01:23.625954 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c71c78f-243c-40f3-aa9b-1cae17afc260","Type":"ContainerStarted","Data":"8bb162f1cb241872eb92a735b1b89491b05d4cab22f5a65d0d113f88aae61962"} Feb 18 15:01:26 crc kubenswrapper[4957]: I0218 15:01:26.213561 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:01:26 crc kubenswrapper[4957]: E0218 15:01:26.214462 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:01:41 crc kubenswrapper[4957]: I0218 15:01:41.213763 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:01:41 crc kubenswrapper[4957]: E0218 15:01:41.214438 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:01:53 crc kubenswrapper[4957]: I0218 15:01:53.213099 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:01:53 crc kubenswrapper[4957]: E0218 15:01:53.214134 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:01:56 crc kubenswrapper[4957]: I0218 15:01:56.033139 4957 generic.go:334] "Generic (PLEG): container finished" podID="7c71c78f-243c-40f3-aa9b-1cae17afc260" 
containerID="8bb162f1cb241872eb92a735b1b89491b05d4cab22f5a65d0d113f88aae61962" exitCode=0 Feb 18 15:01:56 crc kubenswrapper[4957]: I0218 15:01:56.033226 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c71c78f-243c-40f3-aa9b-1cae17afc260","Type":"ContainerDied","Data":"8bb162f1cb241872eb92a735b1b89491b05d4cab22f5a65d0d113f88aae61962"} Feb 18 15:01:57 crc kubenswrapper[4957]: I0218 15:01:57.051879 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c71c78f-243c-40f3-aa9b-1cae17afc260","Type":"ContainerStarted","Data":"7221d205bcf72bde6674cc5f683b2891325b5d372cb1eedda9a7f56f8c10ef78"} Feb 18 15:01:57 crc kubenswrapper[4957]: I0218 15:01:57.052866 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 15:01:57 crc kubenswrapper[4957]: I0218 15:01:57.088870 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.088847945 podStartE2EDuration="38.088847945s" podCreationTimestamp="2026-02-18 15:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:01:57.077198639 +0000 UTC m=+1823.598063433" watchObservedRunningTime="2026-02-18 15:01:57.088847945 +0000 UTC m=+1823.609712699" Feb 18 15:02:05 crc kubenswrapper[4957]: I0218 15:02:05.213505 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:02:05 crc kubenswrapper[4957]: E0218 15:02:05.214460 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:02:09 crc kubenswrapper[4957]: I0218 15:02:09.998635 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 15:02:20 crc kubenswrapper[4957]: I0218 15:02:20.213281 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:02:21 crc kubenswrapper[4957]: I0218 15:02:21.406873 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"9c5e9cb6585cb5698a0ccbe75444be81ef24ea059d215d129f850aa9f0b11cc9"} Feb 18 15:03:16 crc kubenswrapper[4957]: I0218 15:03:16.061880 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7f89w"] Feb 18 15:03:16 crc kubenswrapper[4957]: I0218 15:03:16.074229 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7f89w"] Feb 18 15:03:16 crc kubenswrapper[4957]: I0218 15:03:16.225979 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644bd125-826c-4c6a-85e7-ab56ade2412f" path="/var/lib/kubelet/pods/644bd125-826c-4c6a-85e7-ab56ade2412f/volumes" Feb 18 15:03:17 crc kubenswrapper[4957]: I0218 15:03:17.034398 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-43b7-account-create-update-69wnm"] Feb 18 15:03:17 crc 
kubenswrapper[4957]: I0218 15:03:17.056371 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-43b7-account-create-update-69wnm"] Feb 18 15:03:18 crc kubenswrapper[4957]: I0218 15:03:18.226803 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a720a9a-d59f-4a82-8b76-87196880798e" path="/var/lib/kubelet/pods/9a720a9a-d59f-4a82-8b76-87196880798e/volumes" Feb 18 15:03:19 crc kubenswrapper[4957]: I0218 15:03:19.050553 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-6fnm7"] Feb 18 15:03:19 crc kubenswrapper[4957]: I0218 15:03:19.070016 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-6fnm7"] Feb 18 15:03:20 crc kubenswrapper[4957]: I0218 15:03:20.066628 4957 generic.go:334] "Generic (PLEG): container finished" podID="ce22a60c-ac91-4fcd-a298-330ace1c4d68" containerID="65d75e09233d5a1321649dbe39740c7982a35b4c214b7165dc136659da1d8ad3" exitCode=0 Feb 18 15:03:20 crc kubenswrapper[4957]: I0218 15:03:20.066709 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" event={"ID":"ce22a60c-ac91-4fcd-a298-330ace1c4d68","Type":"ContainerDied","Data":"65d75e09233d5a1321649dbe39740c7982a35b4c214b7165dc136659da1d8ad3"} Feb 18 15:03:20 crc kubenswrapper[4957]: I0218 15:03:20.241445 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91430091-a9a7-4cb1-b10e-85ccaeb24fbc" path="/var/lib/kubelet/pods/91430091-a9a7-4cb1-b10e-85ccaeb24fbc/volumes" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.676412 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.843736 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-bootstrap-combined-ca-bundle\") pod \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.843926 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-ssh-key-openstack-edpm-ipam\") pod \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.843983 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdb5\" (UniqueName: \"kubernetes.io/projected/ce22a60c-ac91-4fcd-a298-330ace1c4d68-kube-api-access-2wdb5\") pod \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.844073 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-inventory\") pod \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\" (UID: \"ce22a60c-ac91-4fcd-a298-330ace1c4d68\") " Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.851990 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: 
"bootstrap-combined-ca-bundle") pod "ce22a60c-ac91-4fcd-a298-330ace1c4d68" (UID: "ce22a60c-ac91-4fcd-a298-330ace1c4d68"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.852211 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce22a60c-ac91-4fcd-a298-330ace1c4d68-kube-api-access-2wdb5" (OuterVolumeSpecName: "kube-api-access-2wdb5") pod "ce22a60c-ac91-4fcd-a298-330ace1c4d68" (UID: "ce22a60c-ac91-4fcd-a298-330ace1c4d68"). InnerVolumeSpecName "kube-api-access-2wdb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.889539 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-inventory" (OuterVolumeSpecName: "inventory") pod "ce22a60c-ac91-4fcd-a298-330ace1c4d68" (UID: "ce22a60c-ac91-4fcd-a298-330ace1c4d68"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.890243 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce22a60c-ac91-4fcd-a298-330ace1c4d68" (UID: "ce22a60c-ac91-4fcd-a298-330ace1c4d68"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.948149 4957 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.948190 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.948202 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wdb5\" (UniqueName: \"kubernetes.io/projected/ce22a60c-ac91-4fcd-a298-330ace1c4d68-kube-api-access-2wdb5\") on node \"crc\" DevicePath \"\"" Feb 18 15:03:21 crc kubenswrapper[4957]: I0218 15:03:21.948211 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce22a60c-ac91-4fcd-a298-330ace1c4d68-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.096739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" event={"ID":"ce22a60c-ac91-4fcd-a298-330ace1c4d68","Type":"ContainerDied","Data":"cd27c316c463c55fed6d092d67fd0b59ebf74df94e1b406d0671769dca00f722"} Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.096787 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd27c316c463c55fed6d092d67fd0b59ebf74df94e1b406d0671769dca00f722" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.096833 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-45npd" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.188204 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q"] Feb 18 15:03:22 crc kubenswrapper[4957]: E0218 15:03:22.188897 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce22a60c-ac91-4fcd-a298-330ace1c4d68" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.188921 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce22a60c-ac91-4fcd-a298-330ace1c4d68" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.189126 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce22a60c-ac91-4fcd-a298-330ace1c4d68" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.190045 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.193843 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.194068 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.196264 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.196619 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.208287 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q"] Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.357481 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.357642 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnlwc\" (UniqueName: \"kubernetes.io/projected/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-kube-api-access-wnlwc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.357727 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: 
I0218 15:03:22.459728 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.459898 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.460013 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnlwc\" (UniqueName: \"kubernetes.io/projected/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-kube-api-access-wnlwc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.466499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.471052 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.477100 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnlwc\" (UniqueName: \"kubernetes.io/projected/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-kube-api-access-wnlwc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:22 crc kubenswrapper[4957]: I0218 15:03:22.509886 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:03:23 crc kubenswrapper[4957]: I0218 15:03:23.128652 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q"] Feb 18 15:03:24 crc kubenswrapper[4957]: I0218 15:03:24.142951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" event={"ID":"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8","Type":"ContainerStarted","Data":"bc9c36cfa25e32f690087b71061f581565b153e78d5b693fc01450352e5c2244"} Feb 18 15:03:24 crc kubenswrapper[4957]: I0218 15:03:24.144794 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" event={"ID":"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8","Type":"ContainerStarted","Data":"ed62f184edb2a935942c51773193a00503c2450705dc13ac5c30bd02115de4df"} Feb 18 15:03:24 crc kubenswrapper[4957]: I0218 15:03:24.167959 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" podStartSLOduration=1.790368583 podStartE2EDuration="2.167907614s" podCreationTimestamp="2026-02-18 15:03:22 +0000 UTC" firstStartedPulling="2026-02-18 15:03:23.142816396 +0000 UTC m=+1909.663681140" lastFinishedPulling="2026-02-18 15:03:23.520355417 +0000 UTC m=+1910.041220171" observedRunningTime="2026-02-18 15:03:24.160612916 +0000 UTC m=+1910.681477660" watchObservedRunningTime="2026-02-18 15:03:24.167907614 +0000 UTC m=+1910.688772358" Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.033956 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7083-account-create-update-hrlxp"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.048619 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-66ea-account-create-update-ttrz9"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.060419 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-f4rlf"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.072124 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g6wjq"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.082947 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a8b2-account-create-update-pwb27"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.095715 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-66ea-account-create-update-ttrz9"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.107806 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7083-account-create-update-hrlxp"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.122643 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-f4rlf"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.138526 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g6wjq"] Feb 18 15:03:25 crc kubenswrapper[4957]: I0218 15:03:25.150731 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a8b2-account-create-update-pwb27"] Feb 18 15:03:26 crc kubenswrapper[4957]: I0218 15:03:26.231553 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147d3a76-36f3-4ba7-85ad-a05dfe2ec485" 
path="/var/lib/kubelet/pods/147d3a76-36f3-4ba7-85ad-a05dfe2ec485/volumes" Feb 18 15:03:26 crc kubenswrapper[4957]: I0218 15:03:26.234486 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23769552-282f-49ac-b650-6047f54aa60e" path="/var/lib/kubelet/pods/23769552-282f-49ac-b650-6047f54aa60e/volumes" Feb 18 15:03:26 crc kubenswrapper[4957]: I0218 15:03:26.237091 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9976d545-5d6c-44b9-993d-14890ae6a93e" path="/var/lib/kubelet/pods/9976d545-5d6c-44b9-993d-14890ae6a93e/volumes" Feb 18 15:03:26 crc kubenswrapper[4957]: I0218 15:03:26.238363 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6828e6e-401e-4774-83d8-eb1dab6661a3" path="/var/lib/kubelet/pods/a6828e6e-401e-4774-83d8-eb1dab6661a3/volumes" Feb 18 15:03:26 crc kubenswrapper[4957]: I0218 15:03:26.240650 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b" path="/var/lib/kubelet/pods/c7e53e53-e1cc-43a4-8b83-ba7ed5bb323b/volumes" Feb 18 15:03:28 crc kubenswrapper[4957]: I0218 15:03:28.059498 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fb8lv"] Feb 18 15:03:28 crc kubenswrapper[4957]: I0218 15:03:28.090320 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fb8lv"] Feb 18 15:03:28 crc kubenswrapper[4957]: I0218 15:03:28.228357 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9333dc0-cbe3-4497-93e0-bb2473de68b0" path="/var/lib/kubelet/pods/c9333dc0-cbe3-4497-93e0-bb2473de68b0/volumes" Feb 18 15:03:34 crc kubenswrapper[4957]: I0218 15:03:34.046683 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz"] Feb 18 15:03:34 crc kubenswrapper[4957]: I0218 15:03:34.064273 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-7b22-account-create-update-zxhd8"] Feb 18 15:03:34 crc kubenswrapper[4957]: I0218 15:03:34.078261 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-7b22-account-create-update-zxhd8"] Feb 18 15:03:34 crc kubenswrapper[4957]: I0218 15:03:34.090746 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6fgdz"] Feb 18 15:03:34 crc kubenswrapper[4957]: I0218 15:03:34.228873 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49ef747-0aae-4f86-9688-69e8fd172494" path="/var/lib/kubelet/pods/c49ef747-0aae-4f86-9688-69e8fd172494/volumes" Feb 18 15:03:34 crc kubenswrapper[4957]: I0218 15:03:34.230550 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d94c2a-28d8-4811-97be-0020687f1773" path="/var/lib/kubelet/pods/d8d94c2a-28d8-4811-97be-0020687f1773/volumes" Feb 18 15:03:57 crc kubenswrapper[4957]: I0218 15:03:57.050958 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-n9k7h"] Feb 18 15:03:57 crc kubenswrapper[4957]: I0218 15:03:57.062534 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-n9k7h"] Feb 18 15:03:58 crc kubenswrapper[4957]: I0218 15:03:58.235325 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e292891-edfa-438c-aadb-3a12e7fdd9a4" path="/var/lib/kubelet/pods/7e292891-edfa-438c-aadb-3a12e7fdd9a4/volumes" Feb 18 15:04:12 crc kubenswrapper[4957]: I0218 15:04:12.864118 4957 scope.go:117] 
"RemoveContainer" containerID="54647dfc8eddac82556d1a05a747cc199d72ca7ab1847176d3da33bb8107b08d" Feb 18 15:04:12 crc kubenswrapper[4957]: I0218 15:04:12.914914 4957 scope.go:117] "RemoveContainer" containerID="df0543d7ec8630cf94804960e03800bd22ba1c6caadf5370cfe686c2dd710a74" Feb 18 15:04:12 crc kubenswrapper[4957]: I0218 15:04:12.965157 4957 scope.go:117] "RemoveContainer" containerID="80851f0c18779713941fc2673565d410018a4662791dbb85c663bcbc5d866a26" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.058768 4957 scope.go:117] "RemoveContainer" containerID="ab1eb9f7c718801405de55e7bcd5bb513c967c41a647c2cd7f8373128378eb56" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.126591 4957 scope.go:117] "RemoveContainer" containerID="330f00f70ba75594a5d00cf4550a12dfd55ef09908bc75397b83d21789ec1bf5" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.201017 4957 scope.go:117] "RemoveContainer" containerID="f7db9bb83d2e8d7b375f755c3737b86b6776a8d268f39ccec4e0e74b0e0ab0aa" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.229550 4957 scope.go:117] "RemoveContainer" containerID="ffcba3119d19cadd3913bde02e917a950f9dcad735ea738b9a917c81bbec112d" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.256827 4957 scope.go:117] "RemoveContainer" containerID="b29e628ff5cb4f796ed60b9c7972af2f184fd134d446f1e92f2d922990b9cd01" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.323631 4957 scope.go:117] "RemoveContainer" containerID="dc8f259ad6031168350986394195d61c1bcddbafa9478819494bfb158caa9f0f" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.351772 4957 scope.go:117] "RemoveContainer" containerID="daec13b1ac2780187283f3f5b92aa166d43a39e17de2cd241beca8f2bfe84743" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.397697 4957 scope.go:117] "RemoveContainer" containerID="a8afd1b0df8368e492ffee19d866f2b61b22e1b01968a86b65e563fa8cd29b12" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.438728 4957 scope.go:117] "RemoveContainer" containerID="122f6d9b86ad2b090728f8a5dab106362988b765b1e16cabea16df7911f18af1" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.468647 4957 scope.go:117] "RemoveContainer" containerID="b8ba44a9044a907512244e6690a33cd19c71efaa27f8e53fa4e8e8f6f432144a" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.493001 4957 scope.go:117] "RemoveContainer" containerID="9be13743bbecc985224ec75eb657b6d3f2cabf139ea7ce55bedec28a1de9ebaf" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.550751 4957 scope.go:117] "RemoveContainer" containerID="7267c9aa2a039d95be3611d4e5e41801655a8fc77da9ab18d32c09815ec615a2" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.574866 4957 scope.go:117] "RemoveContainer" containerID="ce85bc347667418526982f4f5c3199f9ae0b0615c5e84c47687a5fabc71d899b" Feb 18 15:04:13 crc kubenswrapper[4957]: I0218 15:04:13.628651 4957 scope.go:117] "RemoveContainer" containerID="9a9697902f52127868f1c437e783f67935f02fc41ba4e03511ff79d0b1afaa2f" Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.074937 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8r67m"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.132519 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5j8tc"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.160155 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fqdmc"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.171688 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-a24c-account-create-update-j4ppv"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.186645 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-14cd-account-create-update-cnqzt"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.195646 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-339a-account-create-update-rjpcj"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.207522 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-tsbrk"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.219546 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8r67m"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.231916 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5j8tc"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.245407 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7b22-account-create-update-t5mdx"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.255989 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-339a-account-create-update-rjpcj"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.267238 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-tsbrk"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.277831 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-a24c-account-create-update-j4ppv"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.287900 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7b22-account-create-update-t5mdx"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.297736 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fqdmc"] Feb 18 15:04:17 crc kubenswrapper[4957]: I0218 15:04:17.307384 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-14cd-account-create-update-cnqzt"] Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.233973 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bcc539-d1e7-4b12-b87b-d989a5f8db2d" path="/var/lib/kubelet/pods/13bcc539-d1e7-4b12-b87b-d989a5f8db2d/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.236808 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe03eaf-305f-4528-b0df-d1d435a75f30" path="/var/lib/kubelet/pods/4fe03eaf-305f-4528-b0df-d1d435a75f30/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.244248 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c6557d-c336-47c4-b261-2896f28b3a6b" path="/var/lib/kubelet/pods/57c6557d-c336-47c4-b261-2896f28b3a6b/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.247603 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a8fb77-b238-40f4-8905-b6ebe3595115" path="/var/lib/kubelet/pods/83a8fb77-b238-40f4-8905-b6ebe3595115/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.252844 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87460367-575b-4083-bff0-78dc45a41598" path="/var/lib/kubelet/pods/87460367-575b-4083-bff0-78dc45a41598/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.253655 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb1c40c-9227-4912-944f-c18fafab0fc6" 
path="/var/lib/kubelet/pods/aeb1c40c-9227-4912-944f-c18fafab0fc6/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.255617 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff441ba-44a2-40e6-b5ff-aad169a28811" path="/var/lib/kubelet/pods/aff441ba-44a2-40e6-b5ff-aad169a28811/volumes" Feb 18 15:04:18 crc kubenswrapper[4957]: I0218 15:04:18.256778 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38115e0-8b2a-42e5-8d73-a2009264ce13" path="/var/lib/kubelet/pods/b38115e0-8b2a-42e5-8d73-a2009264ce13/volumes" Feb 18 15:04:21 crc kubenswrapper[4957]: I0218 15:04:21.038759 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mflsm"] Feb 18 15:04:21 crc kubenswrapper[4957]: I0218 15:04:21.052271 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mflsm"] Feb 18 15:04:22 crc kubenswrapper[4957]: I0218 15:04:22.225378 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcf177d4-d412-4457-ad6d-a3423ee3dce0" path="/var/lib/kubelet/pods/fcf177d4-d412-4457-ad6d-a3423ee3dce0/volumes" Feb 18 15:04:37 crc kubenswrapper[4957]: I0218 15:04:37.278850 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:04:37 crc kubenswrapper[4957]: I0218 15:04:37.279496 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:04:51 crc kubenswrapper[4957]: I0218 15:04:51.060564 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b2p7g"] Feb 18 15:04:51 crc kubenswrapper[4957]: I0218 15:04:51.071004 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b2p7g"] Feb 18 15:04:52 crc kubenswrapper[4957]: I0218 15:04:52.228255 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c233a613-22d9-4534-811e-31acfe4eb302" path="/var/lib/kubelet/pods/c233a613-22d9-4534-811e-31acfe4eb302/volumes" Feb 18 15:05:07 crc kubenswrapper[4957]: I0218 15:05:07.279353 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:05:07 crc kubenswrapper[4957]: I0218 15:05:07.280644 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.085772 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jmmp8"] Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.099698 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-d89wt"] Feb 18 15:05:08 crc 
kubenswrapper[4957]: I0218 15:05:08.109484 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dfvgf"] Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.119960 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-d89wt"] Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.130945 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dfvgf"] Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.143401 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jmmp8"] Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.227060 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ecbaa7-5ddf-4835-b9b7-01a0c156b75e" path="/var/lib/kubelet/pods/19ecbaa7-5ddf-4835-b9b7-01a0c156b75e/volumes" Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.228923 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3c5afd-c757-4e78-9f08-3d55f1b32ab6" path="/var/lib/kubelet/pods/9f3c5afd-c757-4e78-9f08-3d55f1b32ab6/volumes" Feb 18 15:05:08 crc kubenswrapper[4957]: I0218 15:05:08.232494 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6255f04-0970-449b-b48c-2a812f42b7c5" path="/var/lib/kubelet/pods/c6255f04-0970-449b-b48c-2a812f42b7c5/volumes" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.067503 4957 scope.go:117] "RemoveContainer" containerID="dcdeccaa18d47ed9aba61a7acce59cd112e1073f48ac721be148f83adf44d04e" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.108738 4957 scope.go:117] "RemoveContainer" containerID="b8469d23458e28e2eb80e787472cfd03130fba22b4c4bd1c6c4fcc5ebe59e877" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.180004 4957 scope.go:117] "RemoveContainer" containerID="98a6371c228f55174fd2bce27cd78ad2e220e930622bb81f2528b1980b2651aa" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.248777 4957 scope.go:117] "RemoveContainer" containerID="731abfa56ca6f9c33d8db0d3a24173e07b4e8b32b32805453f0ea5349d13cba3" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.322492 4957 scope.go:117] "RemoveContainer" containerID="b06d36b41a26396c73cd03472cfd89ec2f223e907c596fcaf6b0c07bcf06c549" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.392893 4957 scope.go:117] "RemoveContainer" containerID="375e4383c9a1ac11f448ebed339d8d05a5fb70e6593323b6e0289e7df8b8b7c6" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.478808 4957 scope.go:117] "RemoveContainer" containerID="580a1ba9ac8bfbfeb7a6c690c3ef34c7a8876d5c75cfb75290deeff74bc718c2" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.499409 4957 scope.go:117] "RemoveContainer" containerID="5cd1786ec9ef9559a17db9df0d074bb006fae71bf22c18377d29ae099acc8d4b" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.525592 4957 scope.go:117] "RemoveContainer" containerID="1641bdb6592b0882e3e7fa9edcfd2133ceb319917c89978a2b2c95ac12c5c6d1" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.549466 4957 scope.go:117] "RemoveContainer" containerID="da015619ca0f0a5498e7901cca85c8401fe3baaf6ba02eed568d37ebcdd4db97" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.571139 4957 scope.go:117] "RemoveContainer" containerID="50c99879806d6b17c98ffb370d86f137f5fd3f295e1ce4e6646b901bd59a279e" Feb 18 15:05:14 crc kubenswrapper[4957]: I0218 15:05:14.601629 4957 scope.go:117] "RemoveContainer" containerID="66b30415e27ae4448ad716f032c63f380c2d3287d510cea022a5669e21607e9e" Feb 18 15:05:14 crc 
kubenswrapper[4957]: I0218 15:05:14.628662 4957 scope.go:117] "RemoveContainer" containerID="934ba6875ed91c5f5497a89930bdc9804dd8370aed7d6ff922e51b5f30ec1ef2" Feb 18 15:05:22 crc kubenswrapper[4957]: I0218 15:05:22.617913 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" containerID="bc9c36cfa25e32f690087b71061f581565b153e78d5b693fc01450352e5c2244" exitCode=0 Feb 18 15:05:22 crc kubenswrapper[4957]: I0218 15:05:22.617984 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" event={"ID":"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8","Type":"ContainerDied","Data":"bc9c36cfa25e32f690087b71061f581565b153e78d5b693fc01450352e5c2244"} Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.033653 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5sgzz"] Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.044179 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5sgzz"] Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.229062 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9205f99b-873e-4f53-9d89-85b77ca7adc1" path="/var/lib/kubelet/pods/9205f99b-873e-4f53-9d89-85b77ca7adc1/volumes" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.514238 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.597611 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnlwc\" (UniqueName: \"kubernetes.io/projected/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-kube-api-access-wnlwc\") pod \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.597760 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-ssh-key-openstack-edpm-ipam\") pod \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.597876 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-inventory\") pod \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\" (UID: \"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8\") " Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.607156 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-kube-api-access-wnlwc" (OuterVolumeSpecName: "kube-api-access-wnlwc") pod "1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" (UID: "1a0ed9bb-80bf-415e-b67d-8b79fb24cff8"). InnerVolumeSpecName "kube-api-access-wnlwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.634186 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-inventory" (OuterVolumeSpecName: "inventory") pod "1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" (UID: "1a0ed9bb-80bf-415e-b67d-8b79fb24cff8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.636576 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" (UID: "1a0ed9bb-80bf-415e-b67d-8b79fb24cff8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.645724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" event={"ID":"1a0ed9bb-80bf-415e-b67d-8b79fb24cff8","Type":"ContainerDied","Data":"ed62f184edb2a935942c51773193a00503c2450705dc13ac5c30bd02115de4df"} Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.645777 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed62f184edb2a935942c51773193a00503c2450705dc13ac5c30bd02115de4df" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.645789 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.702450 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnlwc\" (UniqueName: \"kubernetes.io/projected/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-kube-api-access-wnlwc\") on node \"crc\" DevicePath \"\"" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.702507 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.702527 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0ed9bb-80bf-415e-b67d-8b79fb24cff8-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.748814 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql"] Feb 18 15:05:24 crc kubenswrapper[4957]: E0218 15:05:24.749323 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.749342 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.749572 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0ed9bb-80bf-415e-b67d-8b79fb24cff8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.750597 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.754975 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.755356 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.755538 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.755672 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.763112 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql"] Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.907583 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.907864 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxp8q\" (UniqueName: \"kubernetes.io/projected/a4eba666-1695-49b1-8825-a5ba56bee93e-kube-api-access-gxp8q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:24 crc kubenswrapper[4957]: I0218 15:05:24.908058 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.011279 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.011403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxp8q\" (UniqueName: \"kubernetes.io/projected/a4eba666-1695-49b1-8825-a5ba56bee93e-kube-api-access-gxp8q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.011565 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.017330 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.018333 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.040716 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxp8q\" (UniqueName: \"kubernetes.io/projected/a4eba666-1695-49b1-8825-a5ba56bee93e-kube-api-access-gxp8q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.105165 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.683758 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql"] Feb 18 15:05:25 crc kubenswrapper[4957]: I0218 15:05:25.688684 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:05:26 crc kubenswrapper[4957]: I0218 15:05:26.668619 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" event={"ID":"a4eba666-1695-49b1-8825-a5ba56bee93e","Type":"ContainerStarted","Data":"81727221fbea1c1e17536e6270f677df5921cb3e09aa2dff20b301300a8ed7be"} Feb 18 15:05:26 crc kubenswrapper[4957]: I0218 15:05:26.670849 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" event={"ID":"a4eba666-1695-49b1-8825-a5ba56bee93e","Type":"ContainerStarted","Data":"ad4d087cd80eb5aebfc7132ab9e22ca7c4d06384e287061cd29e1a5014bd859d"} Feb 18 15:05:26 crc kubenswrapper[4957]: I0218 15:05:26.691468 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" podStartSLOduration=2.221676257 podStartE2EDuration="2.69144212s" podCreationTimestamp="2026-02-18 15:05:24 +0000 UTC" firstStartedPulling="2026-02-18 15:05:25.688477802 +0000 UTC m=+2032.209342546" lastFinishedPulling="2026-02-18 15:05:26.158243665 +0000 UTC m=+2032.679108409" observedRunningTime="2026-02-18 15:05:26.686571082 +0000 UTC m=+2033.207435916" watchObservedRunningTime="2026-02-18 15:05:26.69144212 +0000 UTC m=+2033.212306864" 
Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.278998 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.279687 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.279771 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.280898 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c5e9cb6585cb5698a0ccbe75444be81ef24ea059d215d129f850aa9f0b11cc9"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.280997 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://9c5e9cb6585cb5698a0ccbe75444be81ef24ea059d215d129f850aa9f0b11cc9" gracePeriod=600 Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.806819 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="9c5e9cb6585cb5698a0ccbe75444be81ef24ea059d215d129f850aa9f0b11cc9" exitCode=0 Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.806871 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"9c5e9cb6585cb5698a0ccbe75444be81ef24ea059d215d129f850aa9f0b11cc9"} Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.807194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba"} Feb 18 15:05:37 crc kubenswrapper[4957]: I0218 15:05:37.807224 4957 scope.go:117] "RemoveContainer" containerID="c4e669be0ab1992542c422c61f78e7526ae93fe9c47c7d1457d0a05970a49398" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.477522 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68hls"] Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.486017 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.493852 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68hls"] Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.635914 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-catalog-content\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.636539 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhsr\" (UniqueName: \"kubernetes.io/projected/99e326ae-6c3a-4a5e-b51c-b52ebe486217-kube-api-access-2dhsr\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.636628 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-utilities\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.739151 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-catalog-content\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.739287 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhsr\" (UniqueName: \"kubernetes.io/projected/99e326ae-6c3a-4a5e-b51c-b52ebe486217-kube-api-access-2dhsr\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.739312 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-utilities\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.739686 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-catalog-content\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.739791 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-utilities\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.763229 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2dhsr\" (UniqueName: \"kubernetes.io/projected/99e326ae-6c3a-4a5e-b51c-b52ebe486217-kube-api-access-2dhsr\") pod \"redhat-operators-68hls\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:04 crc kubenswrapper[4957]: I0218 15:06:04.821196 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:05 crc kubenswrapper[4957]: I0218 15:06:05.343427 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68hls"] Feb 18 15:06:06 crc kubenswrapper[4957]: I0218 15:06:06.158707 4957 generic.go:334] "Generic (PLEG): container finished" podID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerID="415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101" exitCode=0 Feb 18 15:06:06 crc kubenswrapper[4957]: I0218 15:06:06.158744 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerDied","Data":"415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101"} Feb 18 15:06:06 crc kubenswrapper[4957]: I0218 15:06:06.158948 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerStarted","Data":"42a57cc72a6e5dc4ffadcc7b635a8a4a8e8f6416d3187bb3f06f412e5ecdddc5"} Feb 18 15:06:07 crc kubenswrapper[4957]: I0218 15:06:07.172036 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerStarted","Data":"4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39"} Feb 18 15:06:12 crc kubenswrapper[4957]: I0218 15:06:12.229540 4957 generic.go:334] "Generic (PLEG): container finished" podID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerID="4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39" exitCode=0 Feb 18 15:06:12 crc kubenswrapper[4957]: I0218 15:06:12.232810 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerDied","Data":"4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39"} Feb 18 15:06:13 crc kubenswrapper[4957]: I0218 15:06:13.069266 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-b67z9"] Feb 18 15:06:13 crc kubenswrapper[4957]: I0218 15:06:13.084106 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hh59v"] Feb 18 15:06:13 crc kubenswrapper[4957]: I0218 15:06:13.119997 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-b67z9"] Feb 18 15:06:13 crc kubenswrapper[4957]: I0218 15:06:13.129861 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hh59v"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.040506 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0197-account-create-update-tlpj9"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.050628 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f855-account-create-update-2d5wz"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.063361 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-6416-account-create-update-xvbh7"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.074301 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f855-account-create-update-2d5wz"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.084309 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6416-account-create-update-xvbh7"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.094953 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0197-account-create-update-tlpj9"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.108003 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sgw96"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.128887 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sgw96"] Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.229816 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099706d3-04cd-4729-b03d-774bc14ae8b5" path="/var/lib/kubelet/pods/099706d3-04cd-4729-b03d-774bc14ae8b5/volumes" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.231198 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7dd324f-84f9-4860-8cd0-c00e9eba5367" path="/var/lib/kubelet/pods/a7dd324f-84f9-4860-8cd0-c00e9eba5367/volumes" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.232011 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c916872c-8d06-4608-84d8-1159ad3c99eb" path="/var/lib/kubelet/pods/c916872c-8d06-4608-84d8-1159ad3c99eb/volumes" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.234028 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb47212d-7882-43e9-bea7-a114f2e4f629" path="/var/lib/kubelet/pods/cb47212d-7882-43e9-bea7-a114f2e4f629/volumes" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.235355 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb399cd7-737a-423f-8a68-71d4a3c4f592" path="/var/lib/kubelet/pods/eb399cd7-737a-423f-8a68-71d4a3c4f592/volumes" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.236569 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbcc208f-6b78-4c4c-88d9-043d963343de" path="/var/lib/kubelet/pods/fbcc208f-6b78-4c4c-88d9-043d963343de/volumes" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.254069 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerStarted","Data":"f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464"} Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.280601 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68hls" podStartSLOduration=3.462939616 podStartE2EDuration="10.2805745s" podCreationTimestamp="2026-02-18 15:06:04 +0000 UTC" firstStartedPulling="2026-02-18 15:06:06.16099866 +0000 UTC m=+2072.681863404" lastFinishedPulling="2026-02-18 15:06:12.978633544 +0000 UTC m=+2079.499498288" observedRunningTime="2026-02-18 15:06:14.274556139 +0000 UTC m=+2080.795420903" watchObservedRunningTime="2026-02-18 15:06:14.2805745 +0000 UTC m=+2080.801439244" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.821848 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68hls" 
Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.821901 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:14 crc kubenswrapper[4957]: I0218 15:06:14.928331 4957 scope.go:117] "RemoveContainer" containerID="716110cbddac71890976bfd1a4c49210212732b27bb2bf28f291137c8d596708" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.000075 4957 scope.go:117] "RemoveContainer" containerID="884f7835142ab18776738373d9e1490cdd17b294e1b88591dfa7d04c36fa3716" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.064076 4957 scope.go:117] "RemoveContainer" containerID="a467ab3655df39ef6387264594b9fe1a027463d0576d224cadbd9e56f2573717" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.141130 4957 scope.go:117] "RemoveContainer" containerID="40c7b293d24561fef5e2e690dc155987e816f9c9eb40019c0084028a66501342" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.197611 4957 scope.go:117] "RemoveContainer" containerID="ffc19cb12c23d8889812cecd6f0f6af7f8f1ab7d3328e956f842b34069817284" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.245022 4957 scope.go:117] "RemoveContainer" containerID="4a3b20a3f54458baa4887945c0f72d355994507e67f82655beaca5e29bbe5530" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.307076 4957 scope.go:117] "RemoveContainer" containerID="aaebe5fbdb152684860ae096b14c21d0c576ef2f34946a12b07f0cd5158b5ae4" Feb 18 15:06:15 crc kubenswrapper[4957]: I0218 15:06:15.886851 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68hls" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="registry-server" probeResult="failure" output=< Feb 18 15:06:15 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:06:15 crc kubenswrapper[4957]: > Feb 18 15:06:24 crc kubenswrapper[4957]: I0218 15:06:24.906957 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:24 crc kubenswrapper[4957]: I0218 15:06:24.960273 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:25 crc kubenswrapper[4957]: I0218 15:06:25.159828 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68hls"] Feb 18 15:06:26 crc kubenswrapper[4957]: I0218 15:06:26.382806 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68hls" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="registry-server" containerID="cri-o://f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464" gracePeriod=2 Feb 18 15:06:26 crc kubenswrapper[4957]: I0218 15:06:26.947729 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.105591 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhsr\" (UniqueName: \"kubernetes.io/projected/99e326ae-6c3a-4a5e-b51c-b52ebe486217-kube-api-access-2dhsr\") pod \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.106012 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-utilities\") pod \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.106119 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-catalog-content\") pod \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\" (UID: \"99e326ae-6c3a-4a5e-b51c-b52ebe486217\") " Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.106794 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-utilities" (OuterVolumeSpecName: "utilities") pod "99e326ae-6c3a-4a5e-b51c-b52ebe486217" (UID: "99e326ae-6c3a-4a5e-b51c-b52ebe486217"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.107217 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.116532 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e326ae-6c3a-4a5e-b51c-b52ebe486217-kube-api-access-2dhsr" (OuterVolumeSpecName: "kube-api-access-2dhsr") pod "99e326ae-6c3a-4a5e-b51c-b52ebe486217" (UID: "99e326ae-6c3a-4a5e-b51c-b52ebe486217"). InnerVolumeSpecName "kube-api-access-2dhsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.209546 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhsr\" (UniqueName: \"kubernetes.io/projected/99e326ae-6c3a-4a5e-b51c-b52ebe486217-kube-api-access-2dhsr\") on node \"crc\" DevicePath \"\"" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.237870 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99e326ae-6c3a-4a5e-b51c-b52ebe486217" (UID: "99e326ae-6c3a-4a5e-b51c-b52ebe486217"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.312683 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99e326ae-6c3a-4a5e-b51c-b52ebe486217-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.394797 4957 generic.go:334] "Generic (PLEG): container finished" podID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerID="f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464" exitCode=0 Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.394856 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerDied","Data":"f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464"} Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.394915 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hls" event={"ID":"99e326ae-6c3a-4a5e-b51c-b52ebe486217","Type":"ContainerDied","Data":"42a57cc72a6e5dc4ffadcc7b635a8a4a8e8f6416d3187bb3f06f412e5ecdddc5"} Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.394939 4957 scope.go:117] "RemoveContainer" containerID="f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.395181 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hls" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.428527 4957 scope.go:117] "RemoveContainer" containerID="4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.447173 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68hls"] Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.459364 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68hls"] Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.484064 4957 scope.go:117] "RemoveContainer" containerID="415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.522878 4957 scope.go:117] "RemoveContainer" containerID="f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464" Feb 18 15:06:27 crc kubenswrapper[4957]: E0218 15:06:27.523780 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464\": container with ID starting with f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464 not found: ID does not exist" containerID="f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464" Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.523897 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464"} err="failed to get container status \"f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464\": rpc error: code = NotFound desc = could not find container \"f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464\": container with ID starting with f7ec761731d8244daa627fd18baa19fd56863be135b016308d8750379ed41464 not found: ID does not exist" Feb 18 15:06:27 crc 
Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.524011 4957 scope.go:117] "RemoveContainer" containerID="4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39"
Feb 18 15:06:27 crc kubenswrapper[4957]: E0218 15:06:27.524562 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39\": container with ID starting with 4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39 not found: ID does not exist" containerID="4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39"
Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.524666 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39"} err="failed to get container status \"4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39\": rpc error: code = NotFound desc = could not find container \"4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39\": container with ID starting with 4db9d0850e4cd76c4e01d58f51ebc93341f02c4df7181eb41bdf2a3ad4ef2a39 not found: ID does not exist"
Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.524773 4957 scope.go:117] "RemoveContainer" containerID="415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101"
Feb 18 15:06:27 crc kubenswrapper[4957]: E0218 15:06:27.526501 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101\": container with ID starting with 415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101 not found: ID does not exist" containerID="415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101"
Feb 18 15:06:27 crc kubenswrapper[4957]: I0218 15:06:27.526876 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101"} err="failed to get container status \"415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101\": rpc error: code = NotFound desc = could not find container \"415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101\": container with ID starting with 415ba83c80863e71be0453c60835fea5b8711f2ea19d614c62b561e5f2986101 not found: ID does not exist"
Feb 18 15:06:28 crc kubenswrapper[4957]: I0218 15:06:28.225401 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" path="/var/lib/kubelet/pods/99e326ae-6c3a-4a5e-b51c-b52ebe486217/volumes"
Feb 18 15:06:33 crc kubenswrapper[4957]: I0218 15:06:33.455310 4957 generic.go:334] "Generic (PLEG): container finished" podID="a4eba666-1695-49b1-8825-a5ba56bee93e" containerID="81727221fbea1c1e17536e6270f677df5921cb3e09aa2dff20b301300a8ed7be" exitCode=0
Feb 18 15:06:33 crc kubenswrapper[4957]: I0218 15:06:33.455396 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" event={"ID":"a4eba666-1695-49b1-8825-a5ba56bee93e","Type":"ContainerDied","Data":"81727221fbea1c1e17536e6270f677df5921cb3e09aa2dff20b301300a8ed7be"}
Feb 18 15:06:34 crc kubenswrapper[4957]: I0218 15:06:34.907409 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.004521 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-ssh-key-openstack-edpm-ipam\") pod \"a4eba666-1695-49b1-8825-a5ba56bee93e\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") "
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.004593 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxp8q\" (UniqueName: \"kubernetes.io/projected/a4eba666-1695-49b1-8825-a5ba56bee93e-kube-api-access-gxp8q\") pod \"a4eba666-1695-49b1-8825-a5ba56bee93e\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") "
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.004711 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-inventory\") pod \"a4eba666-1695-49b1-8825-a5ba56bee93e\" (UID: \"a4eba666-1695-49b1-8825-a5ba56bee93e\") "
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.010697 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4eba666-1695-49b1-8825-a5ba56bee93e-kube-api-access-gxp8q" (OuterVolumeSpecName: "kube-api-access-gxp8q") pod "a4eba666-1695-49b1-8825-a5ba56bee93e" (UID: "a4eba666-1695-49b1-8825-a5ba56bee93e"). InnerVolumeSpecName "kube-api-access-gxp8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.040475 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4eba666-1695-49b1-8825-a5ba56bee93e" (UID: "a4eba666-1695-49b1-8825-a5ba56bee93e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.046874 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-inventory" (OuterVolumeSpecName: "inventory") pod "a4eba666-1695-49b1-8825-a5ba56bee93e" (UID: "a4eba666-1695-49b1-8825-a5ba56bee93e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.106180 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.106219 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxp8q\" (UniqueName: \"kubernetes.io/projected/a4eba666-1695-49b1-8825-a5ba56bee93e-kube-api-access-gxp8q\") on node \"crc\" DevicePath \"\"" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.106235 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4eba666-1695-49b1-8825-a5ba56bee93e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.475891 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" event={"ID":"a4eba666-1695-49b1-8825-a5ba56bee93e","Type":"ContainerDied","Data":"ad4d087cd80eb5aebfc7132ab9e22ca7c4d06384e287061cd29e1a5014bd859d"} Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.475940 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4d087cd80eb5aebfc7132ab9e22ca7c4d06384e287061cd29e1a5014bd859d" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.475950 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.755491 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"] Feb 18 15:06:35 crc kubenswrapper[4957]: E0218 15:06:35.757097 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="registry-server" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.757209 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="registry-server" Feb 18 15:06:35 crc kubenswrapper[4957]: E0218 15:06:35.757295 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4eba666-1695-49b1-8825-a5ba56bee93e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.757349 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4eba666-1695-49b1-8825-a5ba56bee93e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 15:06:35 crc kubenswrapper[4957]: E0218 15:06:35.757411 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="extract-content" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.757487 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="extract-content" Feb 18 15:06:35 crc kubenswrapper[4957]: E0218 15:06:35.757561 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="extract-utilities" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.757613 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="extract-utilities" Feb 18 15:06:35 crc 
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.757872 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="99e326ae-6c3a-4a5e-b51c-b52ebe486217" containerName="registry-server"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.757949 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4eba666-1695-49b1-8825-a5ba56bee93e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.758836 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.770962 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.771398 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.771718 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.771908 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.780162 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"]
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.826612 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.826690 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.826793 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbf2\" (UniqueName: \"kubernetes.io/projected/b5039a76-1c37-420a-9427-87f7d9b35576-kube-api-access-6mbf2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.928923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.930016 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.930330 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mbf2\" (UniqueName: \"kubernetes.io/projected/b5039a76-1c37-420a-9427-87f7d9b35576-kube-api-access-6mbf2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.933937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.942296 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" Feb 18 15:06:35 crc kubenswrapper[4957]: I0218 15:06:35.948342 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mbf2\" (UniqueName: \"kubernetes.io/projected/b5039a76-1c37-420a-9427-87f7d9b35576-kube-api-access-6mbf2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" Feb 18 15:06:36 crc kubenswrapper[4957]: I0218 15:06:36.099502 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 15:06:36 crc kubenswrapper[4957]: I0218 15:06:36.726795 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"]
Feb 18 15:06:37 crc kubenswrapper[4957]: I0218 15:06:37.505391 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" event={"ID":"b5039a76-1c37-420a-9427-87f7d9b35576","Type":"ContainerStarted","Data":"76b25a739e4fe9d781c1098a0f38c786cccd309e77f9aca3ae3eb2427f04ae16"}
Feb 18 15:06:38 crc kubenswrapper[4957]: I0218 15:06:38.521301 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" event={"ID":"b5039a76-1c37-420a-9427-87f7d9b35576","Type":"ContainerStarted","Data":"b7579e4ea209bb7b4c10e801ba885ffe444204e00f96c39b9a5ef91f43000dc0"}
Feb 18 15:06:38 crc kubenswrapper[4957]: I0218 15:06:38.541321 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" podStartSLOduration=2.9548545859999997 podStartE2EDuration="3.541285265s" podCreationTimestamp="2026-02-18 15:06:35 +0000 UTC" firstStartedPulling="2026-02-18 15:06:36.733521362 +0000 UTC m=+2103.254386126" lastFinishedPulling="2026-02-18 15:06:37.319952061 +0000 UTC m=+2103.840816805" observedRunningTime="2026-02-18 15:06:38.53651209 +0000 UTC m=+2105.057376824" watchObservedRunningTime="2026-02-18 15:06:38.541285265 +0000 UTC m=+2105.062150009"
Feb 18 15:06:43 crc kubenswrapper[4957]: I0218 15:06:43.569749 4957 generic.go:334] "Generic (PLEG): container finished" podID="b5039a76-1c37-420a-9427-87f7d9b35576" containerID="b7579e4ea209bb7b4c10e801ba885ffe444204e00f96c39b9a5ef91f43000dc0" exitCode=0
Feb 18 15:06:43 crc kubenswrapper[4957]: I0218 15:06:43.569833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" event={"ID":"b5039a76-1c37-420a-9427-87f7d9b35576","Type":"ContainerDied","Data":"b7579e4ea209bb7b4c10e801ba885ffe444204e00f96c39b9a5ef91f43000dc0"}
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.098480 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
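The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration appears to be the end-to-end startup time minus the image-pull window (an assumption, but the arithmetic below reproduces the logged value exactly, up to float rounding):

    package main

    import "fmt"

    func main() {
        e2e := 3.541285265                      // podStartE2EDuration (observedRunningTime - podCreationTimestamp)
        pull := 2103.840816805 - 2103.254386126 // lastFinishedPulling - firstStartedPulling (monotonic m=+ clocks)
        fmt.Printf("%.9f\n", e2e-pull)          // 2.954854586 == podStartSLOduration
    }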
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.201876 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-ssh-key-openstack-edpm-ipam\") pod \"b5039a76-1c37-420a-9427-87f7d9b35576\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") "
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.201918 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mbf2\" (UniqueName: \"kubernetes.io/projected/b5039a76-1c37-420a-9427-87f7d9b35576-kube-api-access-6mbf2\") pod \"b5039a76-1c37-420a-9427-87f7d9b35576\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") "
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.202035 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory\") pod \"b5039a76-1c37-420a-9427-87f7d9b35576\" (UID: \"b5039a76-1c37-420a-9427-87f7d9b35576\") "
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.209916 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5039a76-1c37-420a-9427-87f7d9b35576-kube-api-access-6mbf2" (OuterVolumeSpecName: "kube-api-access-6mbf2") pod "b5039a76-1c37-420a-9427-87f7d9b35576" (UID: "b5039a76-1c37-420a-9427-87f7d9b35576"). InnerVolumeSpecName "kube-api-access-6mbf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.235485 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory" (OuterVolumeSpecName: "inventory") pod "b5039a76-1c37-420a-9427-87f7d9b35576" (UID: "b5039a76-1c37-420a-9427-87f7d9b35576"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.242170 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5039a76-1c37-420a-9427-87f7d9b35576" (UID: "b5039a76-1c37-420a-9427-87f7d9b35576"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.305605 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.305641 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mbf2\" (UniqueName: \"kubernetes.io/projected/b5039a76-1c37-420a-9427-87f7d9b35576-kube-api-access-6mbf2\") on node \"crc\" DevicePath \"\""
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.305655 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5039a76-1c37-420a-9427-87f7d9b35576-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.600974 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf" event={"ID":"b5039a76-1c37-420a-9427-87f7d9b35576","Type":"ContainerDied","Data":"76b25a739e4fe9d781c1098a0f38c786cccd309e77f9aca3ae3eb2427f04ae16"}
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.601012 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b25a739e4fe9d781c1098a0f38c786cccd309e77f9aca3ae3eb2427f04ae16"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.601068 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.669704 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"]
Feb 18 15:06:45 crc kubenswrapper[4957]: E0218 15:06:45.670336 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5039a76-1c37-420a-9427-87f7d9b35576" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.670359 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5039a76-1c37-420a-9427-87f7d9b35576" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.670738 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5039a76-1c37-420a-9427-87f7d9b35576" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.671776 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.674530 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.674752 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.675315 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.675766 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.681172 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"]
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.714815 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.715081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4h4\" (UniqueName: \"kubernetes.io/projected/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-kube-api-access-gz4h4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.715587 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.818086 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.818579 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.818658 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4h4\" (UniqueName: \"kubernetes.io/projected/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-kube-api-access-gz4h4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
\"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.822507 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.823320 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.837719 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4h4\" (UniqueName: \"kubernetes.io/projected/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-kube-api-access-gz4h4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-66bnk\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" Feb 18 15:06:45 crc kubenswrapper[4957]: I0218 15:06:45.996740 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" Feb 18 15:06:46 crc kubenswrapper[4957]: I0218 15:06:46.592739 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"] Feb 18 15:06:46 crc kubenswrapper[4957]: I0218 15:06:46.611135 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" event={"ID":"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1","Type":"ContainerStarted","Data":"991f752c7a7f23f4188a497a03f40d3b85055731fbefafbca552f7ecdcebdca6"} Feb 18 15:06:47 crc kubenswrapper[4957]: I0218 15:06:47.050378 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b2srx"] Feb 18 15:06:47 crc kubenswrapper[4957]: I0218 15:06:47.080050 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b2srx"] Feb 18 15:06:47 crc kubenswrapper[4957]: I0218 15:06:47.662953 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" event={"ID":"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1","Type":"ContainerStarted","Data":"a0b11cd610c3c5275cc4615820cf6b8576ac2d2591b7634e3bb1e136c26bf78c"} Feb 18 15:06:47 crc kubenswrapper[4957]: I0218 15:06:47.692741 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" podStartSLOduration=2.034673742 podStartE2EDuration="2.692720216s" podCreationTimestamp="2026-02-18 15:06:45 +0000 UTC" firstStartedPulling="2026-02-18 15:06:46.600563182 +0000 UTC m=+2113.121427926" lastFinishedPulling="2026-02-18 15:06:47.258609656 +0000 UTC m=+2113.779474400" observedRunningTime="2026-02-18 15:06:47.690178183 +0000 UTC m=+2114.211042937" watchObservedRunningTime="2026-02-18 15:06:47.692720216 +0000 UTC m=+2114.213584960" Feb 
Feb 18 15:06:48 crc kubenswrapper[4957]: I0218 15:06:48.230127 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7ac820-e63c-4996-af6b-b0f45530ef91" path="/var/lib/kubelet/pods/cb7ac820-e63c-4996-af6b-b0f45530ef91/volumes"
Feb 18 15:07:08 crc kubenswrapper[4957]: I0218 15:07:08.083110 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2c2d-account-create-update-tn2f2"]
Feb 18 15:07:08 crc kubenswrapper[4957]: I0218 15:07:08.096877 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2c2d-account-create-update-tn2f2"]
Feb 18 15:07:08 crc kubenswrapper[4957]: I0218 15:07:08.111327 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-f7zmd"]
Feb 18 15:07:08 crc kubenswrapper[4957]: I0218 15:07:08.122382 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-f7zmd"]
Feb 18 15:07:08 crc kubenswrapper[4957]: I0218 15:07:08.247359 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="184b1171-6492-48b6-bf23-2286c360264b" path="/var/lib/kubelet/pods/184b1171-6492-48b6-bf23-2286c360264b/volumes"
Feb 18 15:07:08 crc kubenswrapper[4957]: I0218 15:07:08.250172 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab939f4-a04d-445f-92fd-d26bd08f852c" path="/var/lib/kubelet/pods/7ab939f4-a04d-445f-92fd-d26bd08f852c/volumes"
Feb 18 15:07:15 crc kubenswrapper[4957]: I0218 15:07:15.461718 4957 scope.go:117] "RemoveContainer" containerID="e1f1fea28bb35b29ef07e94128f5b946a8099b11fea98c3bc578547fbb5b4ec4"
Feb 18 15:07:15 crc kubenswrapper[4957]: I0218 15:07:15.492675 4957 scope.go:117] "RemoveContainer" containerID="d034e5e901bf87db3b68accb8d7445274e8e34fb1ea4765dd77aa842b1b74514"
Feb 18 15:07:15 crc kubenswrapper[4957]: I0218 15:07:15.583286 4957 scope.go:117] "RemoveContainer" containerID="79a244a584f702c4fb08a1653bc760578b6a60356bb6d816e0d2737600890137"
Feb 18 15:07:18 crc kubenswrapper[4957]: I0218 15:07:18.058733 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2zxb"]
Feb 18 15:07:18 crc kubenswrapper[4957]: I0218 15:07:18.071821 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-k2zxb"]
Feb 18 15:07:18 crc kubenswrapper[4957]: I0218 15:07:18.226531 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa8a4e3-39e3-4e9b-8529-9f712fbc509a" path="/var/lib/kubelet/pods/6fa8a4e3-39e3-4e9b-8529-9f712fbc509a/volumes"
Feb 18 15:07:21 crc kubenswrapper[4957]: I0218 15:07:21.041192 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qcgxs"]
Feb 18 15:07:21 crc kubenswrapper[4957]: I0218 15:07:21.058483 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qcgxs"]
Feb 18 15:07:22 crc kubenswrapper[4957]: I0218 15:07:22.226176 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1361fd5c-3b5c-41e3-9c89-8df6ce0ea622" path="/var/lib/kubelet/pods/1361fd5c-3b5c-41e3-9c89-8df6ce0ea622/volumes"
Feb 18 15:07:24 crc kubenswrapper[4957]: I0218 15:07:24.127000 4957 generic.go:334] "Generic (PLEG): container finished" podID="98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" containerID="a0b11cd610c3c5275cc4615820cf6b8576ac2d2591b7634e3bb1e136c26bf78c" exitCode=0
Feb 18 15:07:24 crc kubenswrapper[4957]: I0218 15:07:24.127088 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" event={"ID":"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1","Type":"ContainerDied","Data":"a0b11cd610c3c5275cc4615820cf6b8576ac2d2591b7634e3bb1e136c26bf78c"}
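The "Cleaned up orphaned pod volumes dir" lines above are housekeeping for pods already removed from the API: once nothing remains mounted under /var/lib/kubelet/pods/<uid>/volumes, the kubelet deletes the leftover directory. A simplified sketch of that scan (the real kubelet also verifies the volumes are unmounted before removing anything):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanupOrphans removes the volumes dir of any pod UID on disk that
    // the kubelet no longer tracks. Simplified: no unmount verification.
    func cleanupOrphans(root string, known map[string]bool) error {
        entries, err := os.ReadDir(root)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if !e.IsDir() || known[e.Name()] {
                continue
            }
            dir := filepath.Join(root, e.Name(), "volumes")
            if err := os.RemoveAll(dir); err != nil {
                return err
            }
            fmt.Printf("Cleaned up orphaned pod volumes dir path=%q\n", dir)
        }
        return nil
    }

    func main() {
        _ = cleanupOrphans("/var/lib/kubelet/pods", map[string]bool{
            "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9": true, // pods still known are left untouched
        })
    }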
event={"ID":"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1","Type":"ContainerDied","Data":"a0b11cd610c3c5275cc4615820cf6b8576ac2d2591b7634e3bb1e136c26bf78c"} Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.828640 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.903213 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz4h4\" (UniqueName: \"kubernetes.io/projected/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-kube-api-access-gz4h4\") pod \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.903506 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-inventory\") pod \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.903577 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-ssh-key-openstack-edpm-ipam\") pod \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\" (UID: \"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1\") " Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.914750 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-kube-api-access-gz4h4" (OuterVolumeSpecName: "kube-api-access-gz4h4") pod "98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" (UID: "98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1"). InnerVolumeSpecName "kube-api-access-gz4h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.938767 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" (UID: "98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:07:25 crc kubenswrapper[4957]: I0218 15:07:25.942713 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-inventory" (OuterVolumeSpecName: "inventory") pod "98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" (UID: "98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1"). InnerVolumeSpecName "inventory". 
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.006275 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.006316 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz4h4\" (UniqueName: \"kubernetes.io/projected/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-kube-api-access-gz4h4\") on node \"crc\" DevicePath \"\""
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.006326 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.149128 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk" event={"ID":"98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1","Type":"ContainerDied","Data":"991f752c7a7f23f4188a497a03f40d3b85055731fbefafbca552f7ecdcebdca6"}
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.149171 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991f752c7a7f23f4188a497a03f40d3b85055731fbefafbca552f7ecdcebdca6"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.149177 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-66bnk"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.286135 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"]
Feb 18 15:07:26 crc kubenswrapper[4957]: E0218 15:07:26.286753 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.286776 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.287094 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.288063 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.291505 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.293673 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.294164 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.294408 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.296976 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"]
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.416769 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gnzj\" (UniqueName: \"kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.417146 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.417303 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.519290 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnzj\" (UniqueName: \"kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.519402 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.519497 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.526253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.530150 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.544806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnzj\" (UniqueName: \"kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
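Each volume of the new pod above goes through the same three logged stages, in order: VerifyControllerAttachedVolume, MountVolume started, MountVolume.SetUp succeeded. For secret and projected volumes, SetUp just writes files under the pod directory; no device attach is involved. A toy pipeline that reproduces only the ordering, not the kubelet's operationExecutor:

    package main

    import "fmt"

    type volume struct{ name, plugin string }

    func reconcile(vols []volume) {
        for _, v := range vols {
            fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.name)
        }
        for _, v := range vols {
            fmt.Printf("MountVolume started for volume %q (%s)\n", v.name, v.plugin)
            // SetUp materializes file contents under /var/lib/kubelet/pods/<uid>/volumes/...
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
        }
    }

    func main() {
        reconcile([]volume{
            {"kube-api-access-8gnzj", "kubernetes.io/projected"},
            {"ssh-key-openstack-edpm-ipam", "kubernetes.io/secret"},
            {"inventory", "kubernetes.io/secret"},
        })
    }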
\"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.526253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.530150 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.544806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnzj\" (UniqueName: \"kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rvs68\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" Feb 18 15:07:26 crc kubenswrapper[4957]: I0218 15:07:26.619097 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" Feb 18 15:07:27 crc kubenswrapper[4957]: I0218 15:07:27.151726 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"] Feb 18 15:07:28 crc kubenswrapper[4957]: I0218 15:07:28.170733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" event={"ID":"4156e79b-8cb7-4a2f-95a8-d782eed526a3","Type":"ContainerStarted","Data":"99185f156007383f528f7be3fbeaa0af6862e71b1e0bf3802034b487f081f787"} Feb 18 15:07:28 crc kubenswrapper[4957]: I0218 15:07:28.171396 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" event={"ID":"4156e79b-8cb7-4a2f-95a8-d782eed526a3","Type":"ContainerStarted","Data":"111bbed9c8d56ab7e9a97394c1a03740b989a380ef499e439d06f4dfe84c2abf"} Feb 18 15:07:28 crc kubenswrapper[4957]: I0218 15:07:28.195846 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" podStartSLOduration=1.667545402 podStartE2EDuration="2.19582518s" podCreationTimestamp="2026-02-18 15:07:26 +0000 UTC" firstStartedPulling="2026-02-18 15:07:27.172647313 +0000 UTC m=+2153.693512057" lastFinishedPulling="2026-02-18 15:07:27.700927091 +0000 UTC m=+2154.221791835" observedRunningTime="2026-02-18 15:07:28.183758867 +0000 UTC m=+2154.704623611" watchObservedRunningTime="2026-02-18 15:07:28.19582518 +0000 UTC m=+2154.716689924" Feb 18 15:07:37 crc kubenswrapper[4957]: I0218 15:07:37.279174 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:07:37 crc kubenswrapper[4957]: I0218 15:07:37.279593 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:08:02 crc kubenswrapper[4957]: I0218 15:08:02.042469 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-npjhw"] Feb 18 15:08:02 crc kubenswrapper[4957]: I0218 15:08:02.052946 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-npjhw"] Feb 18 15:08:02 crc kubenswrapper[4957]: I0218 15:08:02.226191 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe65613-300e-43e4-82df-4480ee80a335" path="/var/lib/kubelet/pods/4fe65613-300e-43e4-82df-4480ee80a335/volumes" Feb 18 15:08:07 crc kubenswrapper[4957]: I0218 15:08:07.279022 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:08:07 crc kubenswrapper[4957]: I0218 15:08:07.279708 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:08:14 crc kubenswrapper[4957]: I0218 15:08:14.663639 4957 generic.go:334] "Generic (PLEG): container finished" podID="4156e79b-8cb7-4a2f-95a8-d782eed526a3" containerID="99185f156007383f528f7be3fbeaa0af6862e71b1e0bf3802034b487f081f787" exitCode=0 Feb 18 15:08:14 crc kubenswrapper[4957]: I0218 15:08:14.663737 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" event={"ID":"4156e79b-8cb7-4a2f-95a8-d782eed526a3","Type":"ContainerDied","Data":"99185f156007383f528f7be3fbeaa0af6862e71b1e0bf3802034b487f081f787"} Feb 18 15:08:15 crc kubenswrapper[4957]: I0218 15:08:15.734748 4957 scope.go:117] "RemoveContainer" containerID="edf131b19d8fe0b28dfbd541a784a4ad56855eeedfb67fee7f5171f0ea8ac5ed" Feb 18 15:08:15 crc kubenswrapper[4957]: I0218 15:08:15.786106 4957 scope.go:117] "RemoveContainer" containerID="3d9c2b9fc3ab71da01f89461a3fb761c6e10c8f3e271eb5de464b72ab8001f95" Feb 18 15:08:15 crc kubenswrapper[4957]: I0218 15:08:15.847594 4957 scope.go:117] "RemoveContainer" containerID="01730d6d735d58461dda1209cdedeaa90b4fcf641bf04b52122f1a25dd8f91a7" Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.216664 4957 util.go:48] "No ready sandbox for pod can be found. 
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.294024 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam\") pod \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") "
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.294087 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gnzj\" (UniqueName: \"kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj\") pod \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") "
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.294294 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") pod \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\" (UID: \"4156e79b-8cb7-4a2f-95a8-d782eed526a3\") "
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.300799 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj" (OuterVolumeSpecName: "kube-api-access-8gnzj") pod "4156e79b-8cb7-4a2f-95a8-d782eed526a3" (UID: "4156e79b-8cb7-4a2f-95a8-d782eed526a3"). InnerVolumeSpecName "kube-api-access-8gnzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.337627 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory" (OuterVolumeSpecName: "inventory") pod "4156e79b-8cb7-4a2f-95a8-d782eed526a3" (UID: "4156e79b-8cb7-4a2f-95a8-d782eed526a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.339605 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4156e79b-8cb7-4a2f-95a8-d782eed526a3" (UID: "4156e79b-8cb7-4a2f-95a8-d782eed526a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.397212 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.397244 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4156e79b-8cb7-4a2f-95a8-d782eed526a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.397259 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gnzj\" (UniqueName: \"kubernetes.io/projected/4156e79b-8cb7-4a2f-95a8-d782eed526a3-kube-api-access-8gnzj\") on node \"crc\" DevicePath \"\""
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.685726 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.685733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rvs68" event={"ID":"4156e79b-8cb7-4a2f-95a8-d782eed526a3","Type":"ContainerDied","Data":"111bbed9c8d56ab7e9a97394c1a03740b989a380ef499e439d06f4dfe84c2abf"}
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.685767 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="111bbed9c8d56ab7e9a97394c1a03740b989a380ef499e439d06f4dfe84c2abf"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.778531 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-44n6z"]
Feb 18 15:08:16 crc kubenswrapper[4957]: E0218 15:08:16.779129 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4156e79b-8cb7-4a2f-95a8-d782eed526a3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.779146 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4156e79b-8cb7-4a2f-95a8-d782eed526a3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.779480 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4156e79b-8cb7-4a2f-95a8-d782eed526a3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.780572 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.783656 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.783736 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.783924 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.784016 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.789161 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-44n6z"]
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.909035 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.909442 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st76c\" (UniqueName: \"kubernetes.io/projected/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-kube-api-access-st76c\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z"
Feb 18 15:08:16 crc kubenswrapper[4957]: I0218 15:08:16.909771 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z"
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.012550 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.012640 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.012808 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st76c\" (UniqueName: \"kubernetes.io/projected/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-kube-api-access-st76c\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.017975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.025064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.029131 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st76c\" (UniqueName: \"kubernetes.io/projected/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-kube-api-access-st76c\") pod \"ssh-known-hosts-edpm-deployment-44n6z\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.145093 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:17 crc kubenswrapper[4957]: W0218 15:08:17.753188 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cfe39f1_7ef2_4668_aef1_22d3b50fb8e9.slice/crio-05fda282a7276428effcddac2f41e50e135999db56c66c33cec28c75859c4605 WatchSource:0}: Error finding container 05fda282a7276428effcddac2f41e50e135999db56c66c33cec28c75859c4605: Status 404 returned error can't find the container with id 05fda282a7276428effcddac2f41e50e135999db56c66c33cec28c75859c4605 Feb 18 15:08:17 crc kubenswrapper[4957]: I0218 15:08:17.754600 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-44n6z"] Feb 18 15:08:18 crc kubenswrapper[4957]: I0218 15:08:18.709747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" event={"ID":"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9","Type":"ContainerStarted","Data":"a6477ef32e20c46f2a1caa222b451a4dad1dbc875fdaf727da1b171562daf5a5"} Feb 18 15:08:18 crc kubenswrapper[4957]: I0218 15:08:18.710445 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" event={"ID":"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9","Type":"ContainerStarted","Data":"05fda282a7276428effcddac2f41e50e135999db56c66c33cec28c75859c4605"} Feb 18 15:08:18 crc kubenswrapper[4957]: I0218 15:08:18.740284 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" podStartSLOduration=2.253327889 podStartE2EDuration="2.740261318s" podCreationTimestamp="2026-02-18 15:08:16 +0000 UTC" firstStartedPulling="2026-02-18 15:08:17.75766608 +0000 UTC m=+2204.278530824" lastFinishedPulling="2026-02-18 15:08:18.244599509 +0000 UTC m=+2204.765464253" observedRunningTime="2026-02-18 15:08:18.726339725 +0000 UTC m=+2205.247204489" watchObservedRunningTime="2026-02-18 15:08:18.740261318 +0000 UTC m=+2205.261126072" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.331094 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jnt5s"] Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.334700 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.346511 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnt5s"] Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.436093 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z272t\" (UniqueName: \"kubernetes.io/projected/819cabb8-0c5b-4165-93a4-69aabb019756-kube-api-access-z272t\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.436397 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-utilities\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.436512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-catalog-content\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.538918 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-utilities\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.538970 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-catalog-content\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.539058 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z272t\" (UniqueName: \"kubernetes.io/projected/819cabb8-0c5b-4165-93a4-69aabb019756-kube-api-access-z272t\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.539654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-utilities\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.539779 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-catalog-content\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.568839 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z272t\" (UniqueName: \"kubernetes.io/projected/819cabb8-0c5b-4165-93a4-69aabb019756-kube-api-access-z272t\") pod \"certified-operators-jnt5s\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:21 crc kubenswrapper[4957]: I0218 15:08:21.697035 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:22 crc kubenswrapper[4957]: W0218 15:08:22.276318 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod819cabb8_0c5b_4165_93a4_69aabb019756.slice/crio-c6167013ed1bd209a7d508fd2fd4f7754d17bd174af2ee37eb63f11d510d0be7 WatchSource:0}: Error finding container c6167013ed1bd209a7d508fd2fd4f7754d17bd174af2ee37eb63f11d510d0be7: Status 404 returned error can't find the container with id c6167013ed1bd209a7d508fd2fd4f7754d17bd174af2ee37eb63f11d510d0be7 Feb 18 15:08:22 crc kubenswrapper[4957]: I0218 15:08:22.280196 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnt5s"] Feb 18 15:08:22 crc kubenswrapper[4957]: I0218 15:08:22.760703 4957 generic.go:334] "Generic (PLEG): container finished" podID="819cabb8-0c5b-4165-93a4-69aabb019756" containerID="671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770" exitCode=0 Feb 18 15:08:22 crc kubenswrapper[4957]: I0218 15:08:22.760760 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerDied","Data":"671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770"} Feb 18 15:08:22 crc kubenswrapper[4957]: I0218 15:08:22.760792 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerStarted","Data":"c6167013ed1bd209a7d508fd2fd4f7754d17bd174af2ee37eb63f11d510d0be7"} Feb 18 15:08:23 crc kubenswrapper[4957]: I0218 15:08:23.772474 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerStarted","Data":"c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f"} Feb 18 15:08:25 crc kubenswrapper[4957]: I0218 15:08:25.795838 4957 generic.go:334] "Generic (PLEG): container finished" podID="819cabb8-0c5b-4165-93a4-69aabb019756" containerID="c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f" exitCode=0 Feb 18 15:08:25 crc kubenswrapper[4957]: I0218 15:08:25.795888 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerDied","Data":"c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f"} Feb 18 15:08:25 crc kubenswrapper[4957]: I0218 15:08:25.798069 4957 generic.go:334] "Generic (PLEG): container finished" podID="3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" containerID="a6477ef32e20c46f2a1caa222b451a4dad1dbc875fdaf727da1b171562daf5a5" exitCode=0 Feb 18 15:08:25 crc kubenswrapper[4957]: I0218 15:08:25.798105 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" 
event={"ID":"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9","Type":"ContainerDied","Data":"a6477ef32e20c46f2a1caa222b451a4dad1dbc875fdaf727da1b171562daf5a5"} Feb 18 15:08:26 crc kubenswrapper[4957]: I0218 15:08:26.813493 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerStarted","Data":"1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d"} Feb 18 15:08:26 crc kubenswrapper[4957]: I0218 15:08:26.841128 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jnt5s" podStartSLOduration=2.3652862519999998 podStartE2EDuration="5.84109785s" podCreationTimestamp="2026-02-18 15:08:21 +0000 UTC" firstStartedPulling="2026-02-18 15:08:22.763600763 +0000 UTC m=+2209.284465507" lastFinishedPulling="2026-02-18 15:08:26.239412361 +0000 UTC m=+2212.760277105" observedRunningTime="2026-02-18 15:08:26.833334706 +0000 UTC m=+2213.354199470" watchObservedRunningTime="2026-02-18 15:08:26.84109785 +0000 UTC m=+2213.361962614" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.417941 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.483607 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st76c\" (UniqueName: \"kubernetes.io/projected/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-kube-api-access-st76c\") pod \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.483761 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam\") pod \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.483836 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-inventory-0\") pod \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\" (UID: \"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9\") " Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.491603 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-kube-api-access-st76c" (OuterVolumeSpecName: "kube-api-access-st76c") pod "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" (UID: "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9"). InnerVolumeSpecName "kube-api-access-st76c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.521179 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" (UID: "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.521609 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" (UID: "3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.587517 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st76c\" (UniqueName: \"kubernetes.io/projected/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-kube-api-access-st76c\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.587828 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.587851 4957 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.824884 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" event={"ID":"3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9","Type":"ContainerDied","Data":"05fda282a7276428effcddac2f41e50e135999db56c66c33cec28c75859c4605"} Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.825153 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05fda282a7276428effcddac2f41e50e135999db56c66c33cec28c75859c4605" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.824961 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-44n6z" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.949922 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px"] Feb 18 15:08:27 crc kubenswrapper[4957]: E0218 15:08:27.950389 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" containerName="ssh-known-hosts-edpm-deployment" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.950406 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" containerName="ssh-known-hosts-edpm-deployment" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.950685 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9" containerName="ssh-known-hosts-edpm-deployment" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.951480 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.953386 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.954151 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.955601 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.955699 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.966354 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px"] Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.996395 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98648\" (UniqueName: \"kubernetes.io/projected/7eb43167-b3f1-4daa-a843-70abb56b314f-kube-api-access-98648\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.996485 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:27 crc kubenswrapper[4957]: I0218 15:08:27.996608 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.098760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.098946 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98648\" (UniqueName: \"kubernetes.io/projected/7eb43167-b3f1-4daa-a843-70abb56b314f-kube-api-access-98648\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.098977 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.105270 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.107133 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.115846 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98648\" (UniqueName: \"kubernetes.io/projected/7eb43167-b3f1-4daa-a843-70abb56b314f-kube-api-access-98648\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5c4px\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.287268 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.724788 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2pzkx"] Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.727327 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.751064 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2pzkx"] Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.816976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-utilities\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.817186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbf7f\" (UniqueName: \"kubernetes.io/projected/91bf72a3-1700-459c-ba18-b7317044a1e2-kube-api-access-cbf7f\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.817262 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-catalog-content\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.846447 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px"] Feb 18 15:08:28 crc kubenswrapper[4957]: W0218 15:08:28.849921 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eb43167_b3f1_4daa_a843_70abb56b314f.slice/crio-20d35286b9decf6401c3eb2ba2713025bdfc3848cb63095ffa648ed9ad459859 WatchSource:0}: Error finding container 20d35286b9decf6401c3eb2ba2713025bdfc3848cb63095ffa648ed9ad459859: Status 404 returned error can't find the container with id 20d35286b9decf6401c3eb2ba2713025bdfc3848cb63095ffa648ed9ad459859 Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.922035 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbf7f\" (UniqueName: \"kubernetes.io/projected/91bf72a3-1700-459c-ba18-b7317044a1e2-kube-api-access-cbf7f\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.922171 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-catalog-content\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.922213 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-utilities\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.922916 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-utilities\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.922967 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-catalog-content\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:28 crc kubenswrapper[4957]: I0218 15:08:28.943215 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbf7f\" (UniqueName: \"kubernetes.io/projected/91bf72a3-1700-459c-ba18-b7317044a1e2-kube-api-access-cbf7f\") pod \"community-operators-2pzkx\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:29 crc kubenswrapper[4957]: I0218 15:08:29.051723 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:29.686956 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2pzkx"] Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:29.852725 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" event={"ID":"7eb43167-b3f1-4daa-a843-70abb56b314f","Type":"ContainerStarted","Data":"cd35b556c56d2d1b3f3bce644c5d697f29c53b115b683306206758a54342a8fe"} Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:29.853058 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" event={"ID":"7eb43167-b3f1-4daa-a843-70abb56b314f","Type":"ContainerStarted","Data":"20d35286b9decf6401c3eb2ba2713025bdfc3848cb63095ffa648ed9ad459859"} Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:29.855216 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerStarted","Data":"fec45782a5706f9952b0da99d83fed1634c389d408d261072b098059ce363217"} Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:29.871351 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" podStartSLOduration=2.322186166 podStartE2EDuration="2.871331917s" podCreationTimestamp="2026-02-18 15:08:27 +0000 UTC" firstStartedPulling="2026-02-18 15:08:28.852562403 +0000 UTC m=+2215.373427147" lastFinishedPulling="2026-02-18 15:08:29.401708154 +0000 UTC m=+2215.922572898" observedRunningTime="2026-02-18 15:08:29.870208826 +0000 UTC m=+2216.391073590" watchObservedRunningTime="2026-02-18 15:08:29.871331917 +0000 UTC m=+2216.392196661" Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:30.867129 4957 generic.go:334] "Generic (PLEG): container finished" podID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerID="57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677" exitCode=0 Feb 18 15:08:30 crc kubenswrapper[4957]: I0218 15:08:30.867598 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" 
event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerDied","Data":"57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677"} Feb 18 15:08:31 crc kubenswrapper[4957]: I0218 15:08:31.697884 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:31 crc kubenswrapper[4957]: I0218 15:08:31.698284 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:31 crc kubenswrapper[4957]: I0218 15:08:31.751906 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:31 crc kubenswrapper[4957]: I0218 15:08:31.880215 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerStarted","Data":"fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5"} Feb 18 15:08:31 crc kubenswrapper[4957]: I0218 15:08:31.939444 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:33 crc kubenswrapper[4957]: I0218 15:08:33.702809 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnt5s"] Feb 18 15:08:33 crc kubenswrapper[4957]: I0218 15:08:33.901270 4957 generic.go:334] "Generic (PLEG): container finished" podID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerID="fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5" exitCode=0 Feb 18 15:08:33 crc kubenswrapper[4957]: I0218 15:08:33.901337 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerDied","Data":"fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5"} Feb 18 15:08:33 crc kubenswrapper[4957]: I0218 15:08:33.901496 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jnt5s" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="registry-server" containerID="cri-o://1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d" gracePeriod=2 Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.428946 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.478735 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-catalog-content\") pod \"819cabb8-0c5b-4165-93a4-69aabb019756\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.478944 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z272t\" (UniqueName: \"kubernetes.io/projected/819cabb8-0c5b-4165-93a4-69aabb019756-kube-api-access-z272t\") pod \"819cabb8-0c5b-4165-93a4-69aabb019756\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.479077 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-utilities\") pod \"819cabb8-0c5b-4165-93a4-69aabb019756\" (UID: \"819cabb8-0c5b-4165-93a4-69aabb019756\") " Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.481049 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-utilities" (OuterVolumeSpecName: "utilities") pod "819cabb8-0c5b-4165-93a4-69aabb019756" (UID: "819cabb8-0c5b-4165-93a4-69aabb019756"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.487250 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819cabb8-0c5b-4165-93a4-69aabb019756-kube-api-access-z272t" (OuterVolumeSpecName: "kube-api-access-z272t") pod "819cabb8-0c5b-4165-93a4-69aabb019756" (UID: "819cabb8-0c5b-4165-93a4-69aabb019756"). InnerVolumeSpecName "kube-api-access-z272t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.546736 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "819cabb8-0c5b-4165-93a4-69aabb019756" (UID: "819cabb8-0c5b-4165-93a4-69aabb019756"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.582585 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.582629 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z272t\" (UniqueName: \"kubernetes.io/projected/819cabb8-0c5b-4165-93a4-69aabb019756-kube-api-access-z272t\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.582645 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/819cabb8-0c5b-4165-93a4-69aabb019756-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.913176 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerStarted","Data":"77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04"} Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.916334 4957 generic.go:334] "Generic (PLEG): container finished" podID="819cabb8-0c5b-4165-93a4-69aabb019756" containerID="1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d" exitCode=0 Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.916445 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnt5s" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.916399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerDied","Data":"1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d"} Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.916658 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnt5s" event={"ID":"819cabb8-0c5b-4165-93a4-69aabb019756","Type":"ContainerDied","Data":"c6167013ed1bd209a7d508fd2fd4f7754d17bd174af2ee37eb63f11d510d0be7"} Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.916706 4957 scope.go:117] "RemoveContainer" containerID="1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.946284 4957 scope.go:117] "RemoveContainer" containerID="c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.948670 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2pzkx" podStartSLOduration=3.30143607 podStartE2EDuration="6.948657167s" podCreationTimestamp="2026-02-18 15:08:28 +0000 UTC" firstStartedPulling="2026-02-18 15:08:30.87004141 +0000 UTC m=+2217.390906154" lastFinishedPulling="2026-02-18 15:08:34.517262497 +0000 UTC m=+2221.038127251" observedRunningTime="2026-02-18 15:08:34.939895676 +0000 UTC m=+2221.460760430" watchObservedRunningTime="2026-02-18 15:08:34.948657167 +0000 UTC m=+2221.469521911" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.968544 4957 scope.go:117] "RemoveContainer" containerID="671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.969373 4957 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnt5s"] Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.981811 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jnt5s"] Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.987892 4957 scope.go:117] "RemoveContainer" containerID="1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d" Feb 18 15:08:34 crc kubenswrapper[4957]: E0218 15:08:34.988313 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d\": container with ID starting with 1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d not found: ID does not exist" containerID="1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.988359 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d"} err="failed to get container status \"1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d\": rpc error: code = NotFound desc = could not find container \"1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d\": container with ID starting with 1e68b87c4c282057b067f23d1a55df68e4688c16ea16f9cd23ef9ecc99036f5d not found: ID does not exist" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.988388 4957 scope.go:117] "RemoveContainer" containerID="c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f" Feb 18 15:08:34 crc kubenswrapper[4957]: E0218 15:08:34.988716 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f\": container with ID starting with c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f not found: ID does not exist" containerID="c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.988755 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f"} err="failed to get container status \"c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f\": rpc error: code = NotFound desc = could not find container \"c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f\": container with ID starting with c20bd75635208702dafcca712ad180735ee49f83837665e998f4ba00cb732f6f not found: ID does not exist" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.988779 4957 scope.go:117] "RemoveContainer" containerID="671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770" Feb 18 15:08:34 crc kubenswrapper[4957]: E0218 15:08:34.989011 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770\": container with ID starting with 671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770 not found: ID does not exist" containerID="671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770" Feb 18 15:08:34 crc kubenswrapper[4957]: I0218 15:08:34.989044 4957 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770"} err="failed to get container status \"671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770\": rpc error: code = NotFound desc = could not find container \"671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770\": container with ID starting with 671bdd1cd49e3428be54c70f476f76ea0259cee7eeb3885ae261d7f80455c770 not found: ID does not exist" Feb 18 15:08:36 crc kubenswrapper[4957]: I0218 15:08:36.227894 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" path="/var/lib/kubelet/pods/819cabb8-0c5b-4165-93a4-69aabb019756/volumes" Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.279319 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.279684 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.279753 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.280676 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.280740 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" gracePeriod=600 Feb 18 15:08:37 crc kubenswrapper[4957]: E0218 15:08:37.402163 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.946483 4957 generic.go:334] "Generic (PLEG): container finished" podID="7eb43167-b3f1-4daa-a843-70abb56b314f" containerID="cd35b556c56d2d1b3f3bce644c5d697f29c53b115b683306206758a54342a8fe" exitCode=0 Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.946582 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" 
event={"ID":"7eb43167-b3f1-4daa-a843-70abb56b314f","Type":"ContainerDied","Data":"cd35b556c56d2d1b3f3bce644c5d697f29c53b115b683306206758a54342a8fe"} Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.948726 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" exitCode=0 Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.948771 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba"} Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.948837 4957 scope.go:117] "RemoveContainer" containerID="9c5e9cb6585cb5698a0ccbe75444be81ef24ea059d215d129f850aa9f0b11cc9" Feb 18 15:08:37 crc kubenswrapper[4957]: I0218 15:08:37.949753 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:08:37 crc kubenswrapper[4957]: E0218 15:08:37.950185 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.051919 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.052240 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.103967 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.484391 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.516002 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-ssh-key-openstack-edpm-ipam\") pod \"7eb43167-b3f1-4daa-a843-70abb56b314f\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.516189 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98648\" (UniqueName: \"kubernetes.io/projected/7eb43167-b3f1-4daa-a843-70abb56b314f-kube-api-access-98648\") pod \"7eb43167-b3f1-4daa-a843-70abb56b314f\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.516472 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-inventory\") pod \"7eb43167-b3f1-4daa-a843-70abb56b314f\" (UID: \"7eb43167-b3f1-4daa-a843-70abb56b314f\") " Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.522946 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb43167-b3f1-4daa-a843-70abb56b314f-kube-api-access-98648" (OuterVolumeSpecName: "kube-api-access-98648") pod "7eb43167-b3f1-4daa-a843-70abb56b314f" (UID: "7eb43167-b3f1-4daa-a843-70abb56b314f"). InnerVolumeSpecName "kube-api-access-98648". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.553697 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7eb43167-b3f1-4daa-a843-70abb56b314f" (UID: "7eb43167-b3f1-4daa-a843-70abb56b314f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.554710 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-inventory" (OuterVolumeSpecName: "inventory") pod "7eb43167-b3f1-4daa-a843-70abb56b314f" (UID: "7eb43167-b3f1-4daa-a843-70abb56b314f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.619636 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.619684 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7eb43167-b3f1-4daa-a843-70abb56b314f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.619702 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98648\" (UniqueName: \"kubernetes.io/projected/7eb43167-b3f1-4daa-a843-70abb56b314f-kube-api-access-98648\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.986042 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.988633 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5c4px" event={"ID":"7eb43167-b3f1-4daa-a843-70abb56b314f","Type":"ContainerDied","Data":"20d35286b9decf6401c3eb2ba2713025bdfc3848cb63095ffa648ed9ad459859"} Feb 18 15:08:39 crc kubenswrapper[4957]: I0218 15:08:39.988782 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d35286b9decf6401c3eb2ba2713025bdfc3848cb63095ffa648ed9ad459859" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.049579 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn"] Feb 18 15:08:40 crc kubenswrapper[4957]: E0218 15:08:40.050202 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb43167-b3f1-4daa-a843-70abb56b314f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.050226 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb43167-b3f1-4daa-a843-70abb56b314f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 15:08:40 crc kubenswrapper[4957]: E0218 15:08:40.050255 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="extract-content" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.050264 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="extract-content" Feb 18 15:08:40 crc kubenswrapper[4957]: E0218 15:08:40.050291 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="registry-server" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.050299 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="registry-server" Feb 18 15:08:40 crc kubenswrapper[4957]: E0218 15:08:40.050338 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="extract-utilities" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.050347 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="extract-utilities" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.050664 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb43167-b3f1-4daa-a843-70abb56b314f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.050699 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="819cabb8-0c5b-4165-93a4-69aabb019756" containerName="registry-server" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.051763 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.054049 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.054050 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.054115 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.058251 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.063861 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.073163 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn"] Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.123594 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2pzkx"] Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.130818 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7kzc\" (UniqueName: \"kubernetes.io/projected/37b31c92-756d-4b57-874f-c5278c279d8b-kube-api-access-v7kzc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.130863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.130985 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.233338 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7kzc\" (UniqueName: \"kubernetes.io/projected/37b31c92-756d-4b57-874f-c5278c279d8b-kube-api-access-v7kzc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.233384 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.233492 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.238571 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.243244 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.250040 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7kzc\" (UniqueName: \"kubernetes.io/projected/37b31c92-756d-4b57-874f-c5278c279d8b-kube-api-access-v7kzc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.374690 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:40 crc kubenswrapper[4957]: I0218 15:08:40.985061 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn"] Feb 18 15:08:41 crc kubenswrapper[4957]: I0218 15:08:41.001413 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" event={"ID":"37b31c92-756d-4b57-874f-c5278c279d8b","Type":"ContainerStarted","Data":"71a0795b0738b2ed5d2f320c03b994cd63b4cfa6f31b0c4ae8b65dd37fffcec3"} Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.014649 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" event={"ID":"37b31c92-756d-4b57-874f-c5278c279d8b","Type":"ContainerStarted","Data":"f5933a1403446cbe6c569fc39c2daab32d8dd5c37ecd0c95361029b93eebe3e9"} Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.014897 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2pzkx" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="registry-server" containerID="cri-o://77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04" gracePeriod=2 Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.036922 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" podStartSLOduration=1.611510019 podStartE2EDuration="2.036905544s" podCreationTimestamp="2026-02-18 15:08:40 +0000 UTC" firstStartedPulling="2026-02-18 15:08:40.98768132 +0000 UTC m=+2227.508546064" lastFinishedPulling="2026-02-18 15:08:41.413076845 +0000 UTC m=+2227.933941589" observedRunningTime="2026-02-18 15:08:42.036877184 +0000 UTC m=+2228.557741968" watchObservedRunningTime="2026-02-18 15:08:42.036905544 +0000 UTC m=+2228.557770288" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.527248 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.600341 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-utilities\") pod \"91bf72a3-1700-459c-ba18-b7317044a1e2\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.600537 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbf7f\" (UniqueName: \"kubernetes.io/projected/91bf72a3-1700-459c-ba18-b7317044a1e2-kube-api-access-cbf7f\") pod \"91bf72a3-1700-459c-ba18-b7317044a1e2\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.600609 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-catalog-content\") pod \"91bf72a3-1700-459c-ba18-b7317044a1e2\" (UID: \"91bf72a3-1700-459c-ba18-b7317044a1e2\") " Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.601465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-utilities" (OuterVolumeSpecName: "utilities") pod "91bf72a3-1700-459c-ba18-b7317044a1e2" (UID: "91bf72a3-1700-459c-ba18-b7317044a1e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.608636 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91bf72a3-1700-459c-ba18-b7317044a1e2-kube-api-access-cbf7f" (OuterVolumeSpecName: "kube-api-access-cbf7f") pod "91bf72a3-1700-459c-ba18-b7317044a1e2" (UID: "91bf72a3-1700-459c-ba18-b7317044a1e2"). InnerVolumeSpecName "kube-api-access-cbf7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.662786 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91bf72a3-1700-459c-ba18-b7317044a1e2" (UID: "91bf72a3-1700-459c-ba18-b7317044a1e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.704324 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbf7f\" (UniqueName: \"kubernetes.io/projected/91bf72a3-1700-459c-ba18-b7317044a1e2-kube-api-access-cbf7f\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.704631 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:42 crc kubenswrapper[4957]: I0218 15:08:42.704712 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bf72a3-1700-459c-ba18-b7317044a1e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.029995 4957 generic.go:334] "Generic (PLEG): container finished" podID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerID="77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04" exitCode=0 Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.030068 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pzkx" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.030074 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerDied","Data":"77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04"} Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.030456 4957 scope.go:117] "RemoveContainer" containerID="77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.030629 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pzkx" event={"ID":"91bf72a3-1700-459c-ba18-b7317044a1e2","Type":"ContainerDied","Data":"fec45782a5706f9952b0da99d83fed1634c389d408d261072b098059ce363217"} Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.068794 4957 scope.go:117] "RemoveContainer" containerID="fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.075642 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2pzkx"] Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.099813 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2pzkx"] Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.128287 4957 scope.go:117] "RemoveContainer" containerID="57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.168641 4957 scope.go:117] "RemoveContainer" containerID="77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04" Feb 18 15:08:43 crc kubenswrapper[4957]: E0218 15:08:43.169292 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04\": container with ID starting with 77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04 not found: ID does not exist" containerID="77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.169335 
4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04"} err="failed to get container status \"77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04\": rpc error: code = NotFound desc = could not find container \"77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04\": container with ID starting with 77edba93cdbff2311b9a47fd3d58b1673b59532c3e36d9b9d639ef5180c2fc04 not found: ID does not exist" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.169361 4957 scope.go:117] "RemoveContainer" containerID="fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5" Feb 18 15:08:43 crc kubenswrapper[4957]: E0218 15:08:43.169653 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5\": container with ID starting with fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5 not found: ID does not exist" containerID="fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.169685 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5"} err="failed to get container status \"fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5\": rpc error: code = NotFound desc = could not find container \"fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5\": container with ID starting with fa3af1c94886c7f648690861f2c2bbfb0038b3e5c2790cbac3f925f03a38eba5 not found: ID does not exist" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.169704 4957 scope.go:117] "RemoveContainer" containerID="57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677" Feb 18 15:08:43 crc kubenswrapper[4957]: E0218 15:08:43.169977 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677\": container with ID starting with 57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677 not found: ID does not exist" containerID="57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677" Feb 18 15:08:43 crc kubenswrapper[4957]: I0218 15:08:43.170002 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677"} err="failed to get container status \"57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677\": rpc error: code = NotFound desc = could not find container \"57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677\": container with ID starting with 57bd19d44932be2baba4c3ad03cb53764c2784181cffe9f6a04ea0a5c9b94677 not found: ID does not exist" Feb 18 15:08:44 crc kubenswrapper[4957]: I0218 15:08:44.229384 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" path="/var/lib/kubelet/pods/91bf72a3-1700-459c-ba18-b7317044a1e2/volumes" Feb 18 15:08:50 crc kubenswrapper[4957]: I0218 15:08:50.213683 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:08:50 crc kubenswrapper[4957]: E0218 15:08:50.214531 4957 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:08:51 crc kubenswrapper[4957]: I0218 15:08:51.136645 4957 generic.go:334] "Generic (PLEG): container finished" podID="37b31c92-756d-4b57-874f-c5278c279d8b" containerID="f5933a1403446cbe6c569fc39c2daab32d8dd5c37ecd0c95361029b93eebe3e9" exitCode=0 Feb 18 15:08:51 crc kubenswrapper[4957]: I0218 15:08:51.136959 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" event={"ID":"37b31c92-756d-4b57-874f-c5278c279d8b","Type":"ContainerDied","Data":"f5933a1403446cbe6c569fc39c2daab32d8dd5c37ecd0c95361029b93eebe3e9"} Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.652026 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.766671 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-ssh-key-openstack-edpm-ipam\") pod \"37b31c92-756d-4b57-874f-c5278c279d8b\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.767080 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-inventory\") pod \"37b31c92-756d-4b57-874f-c5278c279d8b\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.768073 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7kzc\" (UniqueName: \"kubernetes.io/projected/37b31c92-756d-4b57-874f-c5278c279d8b-kube-api-access-v7kzc\") pod \"37b31c92-756d-4b57-874f-c5278c279d8b\" (UID: \"37b31c92-756d-4b57-874f-c5278c279d8b\") " Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.773791 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b31c92-756d-4b57-874f-c5278c279d8b-kube-api-access-v7kzc" (OuterVolumeSpecName: "kube-api-access-v7kzc") pod "37b31c92-756d-4b57-874f-c5278c279d8b" (UID: "37b31c92-756d-4b57-874f-c5278c279d8b"). InnerVolumeSpecName "kube-api-access-v7kzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.810684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-inventory" (OuterVolumeSpecName: "inventory") pod "37b31c92-756d-4b57-874f-c5278c279d8b" (UID: "37b31c92-756d-4b57-874f-c5278c279d8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.811126 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37b31c92-756d-4b57-874f-c5278c279d8b" (UID: "37b31c92-756d-4b57-874f-c5278c279d8b"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.871347 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.871409 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b31c92-756d-4b57-874f-c5278c279d8b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:52 crc kubenswrapper[4957]: I0218 15:08:52.871451 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7kzc\" (UniqueName: \"kubernetes.io/projected/37b31c92-756d-4b57-874f-c5278c279d8b-kube-api-access-v7kzc\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.159468 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" event={"ID":"37b31c92-756d-4b57-874f-c5278c279d8b","Type":"ContainerDied","Data":"71a0795b0738b2ed5d2f320c03b994cd63b4cfa6f31b0c4ae8b65dd37fffcec3"} Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.159528 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a0795b0738b2ed5d2f320c03b994cd63b4cfa6f31b0c4ae8b65dd37fffcec3" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.159563 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.243717 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p"] Feb 18 15:08:53 crc kubenswrapper[4957]: E0218 15:08:53.244189 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="extract-content" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.244206 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="extract-content" Feb 18 15:08:53 crc kubenswrapper[4957]: E0218 15:08:53.244228 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b31c92-756d-4b57-874f-c5278c279d8b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.244236 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b31c92-756d-4b57-874f-c5278c279d8b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 15:08:53 crc kubenswrapper[4957]: E0218 15:08:53.244258 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="registry-server" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.244264 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="registry-server" Feb 18 15:08:53 crc kubenswrapper[4957]: E0218 15:08:53.244274 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="extract-utilities" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.244280 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="extract-utilities" Feb 18 15:08:53 
crc kubenswrapper[4957]: I0218 15:08:53.244511 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="91bf72a3-1700-459c-ba18-b7317044a1e2" containerName="registry-server" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.244537 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b31c92-756d-4b57-874f-c5278c279d8b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.245366 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.247828 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248402 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248489 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248522 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248726 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248756 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248775 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.248950 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.249337 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.265970 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p"] Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385352 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385439 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: 
I0218 15:08:53.385499 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385558 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cv99\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-kube-api-access-4cv99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385582 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385628 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385650 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385676 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc 
kubenswrapper[4957]: I0218 15:08:53.385694 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385726 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385764 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385787 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385804 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385846 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.385871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.487695 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.487784 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.487859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.487887 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.487936 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cv99\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-kube-api-access-4cv99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.487965 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488034 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488068 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488106 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488181 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488239 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488299 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488377 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.488438 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.492376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.492605 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.493360 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.493365 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.493549 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.493756 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.494618 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.494690 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.494759 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.494963 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.495531 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.496467 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.497880 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.501265 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.502269 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.510976 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cv99\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-kube-api-access-4cv99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf22p\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:53 crc kubenswrapper[4957]: I0218 15:08:53.570725 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:08:54 crc kubenswrapper[4957]: I0218 15:08:54.184188 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p"] Feb 18 15:08:54 crc kubenswrapper[4957]: W0218 15:08:54.194393 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47b95013_4d1a_4b81_b1c7_1fed8ecff2b1.slice/crio-7af19bed6eb104509d7e215a9273b18500ea04d435ccb2841121fd53bd5cdc2b WatchSource:0}: Error finding container 7af19bed6eb104509d7e215a9273b18500ea04d435ccb2841121fd53bd5cdc2b: Status 404 returned error can't find the container with id 7af19bed6eb104509d7e215a9273b18500ea04d435ccb2841121fd53bd5cdc2b Feb 18 15:08:55 crc kubenswrapper[4957]: I0218 15:08:55.192038 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" event={"ID":"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1","Type":"ContainerStarted","Data":"92a4ff1f8b159619bee0f2510ea0aca8af8992361cb3a3a947800c79cac01191"} Feb 18 15:08:55 crc kubenswrapper[4957]: I0218 15:08:55.192490 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" event={"ID":"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1","Type":"ContainerStarted","Data":"7af19bed6eb104509d7e215a9273b18500ea04d435ccb2841121fd53bd5cdc2b"} Feb 18 15:08:55 crc kubenswrapper[4957]: I0218 15:08:55.223239 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" podStartSLOduration=1.8421776479999998 podStartE2EDuration="2.22321615s" podCreationTimestamp="2026-02-18 15:08:53 +0000 UTC" firstStartedPulling="2026-02-18 15:08:54.196997931 +0000 UTC m=+2240.717862685" lastFinishedPulling="2026-02-18 15:08:54.578036413 +0000 UTC m=+2241.098901187" observedRunningTime="2026-02-18 15:08:55.220067964 +0000 UTC m=+2241.740932718" watchObservedRunningTime="2026-02-18 15:08:55.22321615 +0000 UTC m=+2241.744080894" Feb 18 15:09:01 crc kubenswrapper[4957]: I0218 15:09:01.214713 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:09:01 crc 
kubenswrapper[4957]: E0218 15:09:01.216024 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.704664 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ml76k"] Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.708697 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.717882 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ml76k"] Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.797647 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-catalog-content\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.797717 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kmq\" (UniqueName: \"kubernetes.io/projected/1af582dd-5d21-4cda-b867-20436a531406-kube-api-access-g6kmq\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.797826 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-utilities\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.900283 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6kmq\" (UniqueName: \"kubernetes.io/projected/1af582dd-5d21-4cda-b867-20436a531406-kube-api-access-g6kmq\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.900631 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-utilities\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.900762 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-catalog-content\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.901183 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-utilities\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.901220 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-catalog-content\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:10 crc kubenswrapper[4957]: I0218 15:09:10.921393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6kmq\" (UniqueName: \"kubernetes.io/projected/1af582dd-5d21-4cda-b867-20436a531406-kube-api-access-g6kmq\") pod \"redhat-marketplace-ml76k\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:11 crc kubenswrapper[4957]: I0218 15:09:11.034886 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:11 crc kubenswrapper[4957]: I0218 15:09:11.574085 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ml76k"] Feb 18 15:09:12 crc kubenswrapper[4957]: I0218 15:09:12.401256 4957 generic.go:334] "Generic (PLEG): container finished" podID="1af582dd-5d21-4cda-b867-20436a531406" containerID="e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc" exitCode=0 Feb 18 15:09:12 crc kubenswrapper[4957]: I0218 15:09:12.401359 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ml76k" event={"ID":"1af582dd-5d21-4cda-b867-20436a531406","Type":"ContainerDied","Data":"e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc"} Feb 18 15:09:12 crc kubenswrapper[4957]: I0218 15:09:12.401616 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ml76k" event={"ID":"1af582dd-5d21-4cda-b867-20436a531406","Type":"ContainerStarted","Data":"8b67a40b83c851e7579928c1c2f8dd9804f901943505c3b34faceb9837de216d"} Feb 18 15:09:14 crc kubenswrapper[4957]: I0218 15:09:14.221159 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:09:14 crc kubenswrapper[4957]: E0218 15:09:14.222476 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:09:14 crc kubenswrapper[4957]: I0218 15:09:14.427783 4957 generic.go:334] "Generic (PLEG): container finished" podID="1af582dd-5d21-4cda-b867-20436a531406" containerID="18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9" exitCode=0 Feb 18 15:09:14 crc kubenswrapper[4957]: I0218 15:09:14.427837 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ml76k" 
event={"ID":"1af582dd-5d21-4cda-b867-20436a531406","Type":"ContainerDied","Data":"18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9"} Feb 18 15:09:15 crc kubenswrapper[4957]: I0218 15:09:15.439050 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ml76k" event={"ID":"1af582dd-5d21-4cda-b867-20436a531406","Type":"ContainerStarted","Data":"68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6"} Feb 18 15:09:15 crc kubenswrapper[4957]: I0218 15:09:15.463982 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ml76k" podStartSLOduration=3.007957829 podStartE2EDuration="5.463962563s" podCreationTimestamp="2026-02-18 15:09:10 +0000 UTC" firstStartedPulling="2026-02-18 15:09:12.403639047 +0000 UTC m=+2258.924503791" lastFinishedPulling="2026-02-18 15:09:14.859643771 +0000 UTC m=+2261.380508525" observedRunningTime="2026-02-18 15:09:15.45478225 +0000 UTC m=+2261.975647005" watchObservedRunningTime="2026-02-18 15:09:15.463962563 +0000 UTC m=+2261.984827307" Feb 18 15:09:21 crc kubenswrapper[4957]: I0218 15:09:21.035115 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:21 crc kubenswrapper[4957]: I0218 15:09:21.035599 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:21 crc kubenswrapper[4957]: I0218 15:09:21.079442 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:21 crc kubenswrapper[4957]: I0218 15:09:21.559378 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:21 crc kubenswrapper[4957]: I0218 15:09:21.617266 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ml76k"] Feb 18 15:09:23 crc kubenswrapper[4957]: I0218 15:09:23.526817 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ml76k" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="registry-server" containerID="cri-o://68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6" gracePeriod=2 Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.049364 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.080394 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-utilities\") pod \"1af582dd-5d21-4cda-b867-20436a531406\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.080659 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6kmq\" (UniqueName: \"kubernetes.io/projected/1af582dd-5d21-4cda-b867-20436a531406-kube-api-access-g6kmq\") pod \"1af582dd-5d21-4cda-b867-20436a531406\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.080750 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-catalog-content\") pod \"1af582dd-5d21-4cda-b867-20436a531406\" (UID: \"1af582dd-5d21-4cda-b867-20436a531406\") " Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.083282 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-utilities" (OuterVolumeSpecName: "utilities") pod "1af582dd-5d21-4cda-b867-20436a531406" (UID: "1af582dd-5d21-4cda-b867-20436a531406"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.095101 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af582dd-5d21-4cda-b867-20436a531406-kube-api-access-g6kmq" (OuterVolumeSpecName: "kube-api-access-g6kmq") pod "1af582dd-5d21-4cda-b867-20436a531406" (UID: "1af582dd-5d21-4cda-b867-20436a531406"). InnerVolumeSpecName "kube-api-access-g6kmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.179277 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1af582dd-5d21-4cda-b867-20436a531406" (UID: "1af582dd-5d21-4cda-b867-20436a531406"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.184351 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.184386 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6kmq\" (UniqueName: \"kubernetes.io/projected/1af582dd-5d21-4cda-b867-20436a531406-kube-api-access-g6kmq\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.184397 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af582dd-5d21-4cda-b867-20436a531406-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.548228 4957 generic.go:334] "Generic (PLEG): container finished" podID="1af582dd-5d21-4cda-b867-20436a531406" containerID="68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6" exitCode=0 Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.548301 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ml76k" event={"ID":"1af582dd-5d21-4cda-b867-20436a531406","Type":"ContainerDied","Data":"68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6"} Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.548326 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ml76k" event={"ID":"1af582dd-5d21-4cda-b867-20436a531406","Type":"ContainerDied","Data":"8b67a40b83c851e7579928c1c2f8dd9804f901943505c3b34faceb9837de216d"} Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.548342 4957 scope.go:117] "RemoveContainer" containerID="68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.548498 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ml76k" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.586012 4957 scope.go:117] "RemoveContainer" containerID="18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.587007 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ml76k"] Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.598761 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ml76k"] Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.619379 4957 scope.go:117] "RemoveContainer" containerID="e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.669238 4957 scope.go:117] "RemoveContainer" containerID="68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6" Feb 18 15:09:24 crc kubenswrapper[4957]: E0218 15:09:24.669765 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6\": container with ID starting with 68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6 not found: ID does not exist" containerID="68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.669899 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6"} err="failed to get container status \"68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6\": rpc error: code = NotFound desc = could not find container \"68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6\": container with ID starting with 68d317fbcdd44e2b7ec26cc96444eca489e10f21fd7ed372e8e4e00a3c3a79f6 not found: ID does not exist" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.670017 4957 scope.go:117] "RemoveContainer" containerID="18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9" Feb 18 15:09:24 crc kubenswrapper[4957]: E0218 15:09:24.670365 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9\": container with ID starting with 18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9 not found: ID does not exist" containerID="18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.670398 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9"} err="failed to get container status \"18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9\": rpc error: code = NotFound desc = could not find container \"18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9\": container with ID starting with 18f6b12d8059425760a4eb14fdae63df6d2eb91fab475f8112432825e65ddaa9 not found: ID does not exist" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.670432 4957 scope.go:117] "RemoveContainer" containerID="e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc" Feb 18 15:09:24 crc kubenswrapper[4957]: E0218 15:09:24.670770 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc\": container with ID starting with e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc not found: ID does not exist" containerID="e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc" Feb 18 15:09:24 crc kubenswrapper[4957]: I0218 15:09:24.670811 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc"} err="failed to get container status \"e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc\": rpc error: code = NotFound desc = could not find container \"e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc\": container with ID starting with e36ee12d695c747de5f67364f45ea3c435aafc7a308d2dc4b42283b5b5340abc not found: ID does not exist" Feb 18 15:09:26 crc kubenswrapper[4957]: I0218 15:09:26.229864 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af582dd-5d21-4cda-b867-20436a531406" path="/var/lib/kubelet/pods/1af582dd-5d21-4cda-b867-20436a531406/volumes" Feb 18 15:09:27 crc kubenswrapper[4957]: I0218 15:09:27.049576 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-ppkbv"] Feb 18 15:09:27 crc kubenswrapper[4957]: I0218 15:09:27.062986 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-ppkbv"] Feb 18 15:09:28 crc kubenswrapper[4957]: I0218 15:09:28.214102 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:09:28 crc kubenswrapper[4957]: E0218 15:09:28.214562 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:09:28 crc kubenswrapper[4957]: I0218 15:09:28.228278 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c033e783-4e0d-4ec1-a8c1-877fad072b9b" path="/var/lib/kubelet/pods/c033e783-4e0d-4ec1-a8c1-877fad072b9b/volumes" Feb 18 15:09:37 crc kubenswrapper[4957]: I0218 15:09:37.714233 4957 generic.go:334] "Generic (PLEG): container finished" podID="47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" containerID="92a4ff1f8b159619bee0f2510ea0aca8af8992361cb3a3a947800c79cac01191" exitCode=0 Feb 18 15:09:37 crc kubenswrapper[4957]: I0218 15:09:37.714314 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" event={"ID":"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1","Type":"ContainerDied","Data":"92a4ff1f8b159619bee0f2510ea0aca8af8992361cb3a3a947800c79cac01191"} Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.212898 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:09:39 crc kubenswrapper[4957]: E0218 15:09:39.213607 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.277578 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.385970 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.386385 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-libvirt-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.386712 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.386914 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.386969 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387059 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387103 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-nova-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387552 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ovn-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387620 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ssh-key-openstack-edpm-ipam\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387674 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cv99\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-kube-api-access-4cv99\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387731 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-bootstrap-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387749 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387785 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-inventory\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387805 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-repo-setup-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387825 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-power-monitoring-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.387851 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-neutron-metadata-combined-ca-bundle\") pod \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\" (UID: \"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1\") " Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.394408 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: 
"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.394664 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.395336 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.395340 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.395394 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.395848 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.395906 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.396223 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.396262 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.396973 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-kube-api-access-4cv99" (OuterVolumeSpecName: "kube-api-access-4cv99") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "kube-api-access-4cv99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.398372 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.399491 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.399621 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.402707 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.428935 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-inventory" (OuterVolumeSpecName: "inventory") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.441698 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" (UID: "47b95013-4d1a-4b81-b1c7-1fed8ecff2b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490761 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490799 4957 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490811 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490819 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490829 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cv99\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-kube-api-access-4cv99\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490837 4957 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490845 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490854 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490863 4957 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-repo-setup-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490874 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490884 4957 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490893 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490902 4957 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490911 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490920 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.490930 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47b95013-4d1a-4b81-b1c7-1fed8ecff2b1-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.756588 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" event={"ID":"47b95013-4d1a-4b81-b1c7-1fed8ecff2b1","Type":"ContainerDied","Data":"7af19bed6eb104509d7e215a9273b18500ea04d435ccb2841121fd53bd5cdc2b"} Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.756631 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af19bed6eb104509d7e215a9273b18500ea04d435ccb2841121fd53bd5cdc2b" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.756659 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf22p" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.911435 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb"] Feb 18 15:09:39 crc kubenswrapper[4957]: E0218 15:09:39.912405 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="extract-utilities" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.912447 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="extract-utilities" Feb 18 15:09:39 crc kubenswrapper[4957]: E0218 15:09:39.912479 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="registry-server" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.912487 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="registry-server" Feb 18 15:09:39 crc kubenswrapper[4957]: E0218 15:09:39.912514 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.912522 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 15:09:39 crc kubenswrapper[4957]: E0218 15:09:39.912544 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="extract-content" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.912549 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="extract-content" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.912776 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b95013-4d1a-4b81-b1c7-1fed8ecff2b1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.912808 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af582dd-5d21-4cda-b867-20436a531406" containerName="registry-server" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.915171 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.924672 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb"] Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.925298 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.925422 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.925529 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.926009 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:09:39 crc kubenswrapper[4957]: I0218 15:09:39.926041 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.005438 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.005516 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e041f9c9-d870-4b03-a9c1-98316547db7b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.005631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.005661 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4vc\" (UniqueName: \"kubernetes.io/projected/e041f9c9-d870-4b03-a9c1-98316547db7b-kube-api-access-jb4vc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.005909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.108732 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.108782 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb4vc\" (UniqueName: \"kubernetes.io/projected/e041f9c9-d870-4b03-a9c1-98316547db7b-kube-api-access-jb4vc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.108906 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.109007 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.109029 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e041f9c9-d870-4b03-a9c1-98316547db7b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.110123 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e041f9c9-d870-4b03-a9c1-98316547db7b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.112333 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.112411 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.117000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.123768 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb4vc\" (UniqueName: \"kubernetes.io/projected/e041f9c9-d870-4b03-a9c1-98316547db7b-kube-api-access-jb4vc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-b4qdb\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.242192 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:09:40 crc kubenswrapper[4957]: I0218 15:09:40.850514 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb"] Feb 18 15:09:41 crc kubenswrapper[4957]: I0218 15:09:41.778534 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" event={"ID":"e041f9c9-d870-4b03-a9c1-98316547db7b","Type":"ContainerStarted","Data":"412d9e31eeb49c004293d8b7de196f6b0b39db99c70f552c9773225ee79fad44"} Feb 18 15:09:41 crc kubenswrapper[4957]: I0218 15:09:41.778878 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" event={"ID":"e041f9c9-d870-4b03-a9c1-98316547db7b","Type":"ContainerStarted","Data":"654dece0fee78620e24d27204309868b5ade3d940493a6581280b552c9f3fa31"} Feb 18 15:09:41 crc kubenswrapper[4957]: I0218 15:09:41.800786 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" podStartSLOduration=2.386766379 podStartE2EDuration="2.800767759s" podCreationTimestamp="2026-02-18 15:09:39 +0000 UTC" firstStartedPulling="2026-02-18 15:09:40.856361913 +0000 UTC m=+2287.377226647" lastFinishedPulling="2026-02-18 15:09:41.270363283 +0000 UTC m=+2287.791228027" observedRunningTime="2026-02-18 15:09:41.796935834 +0000 UTC m=+2288.317800588" watchObservedRunningTime="2026-02-18 15:09:41.800767759 +0000 UTC m=+2288.321632503" Feb 18 15:09:53 crc kubenswrapper[4957]: I0218 15:09:53.213292 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:09:53 crc kubenswrapper[4957]: E0218 15:09:53.214186 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:10:06 crc kubenswrapper[4957]: I0218 15:10:06.214269 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:10:06 crc kubenswrapper[4957]: E0218 15:10:06.215053 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:10:12 crc kubenswrapper[4957]: I0218 15:10:12.043663 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-stkjp"] Feb 18 15:10:12 crc kubenswrapper[4957]: I0218 15:10:12.056736 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-stkjp"] Feb 18 15:10:12 crc kubenswrapper[4957]: I0218 15:10:12.229591 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe56e22-16a2-4e2a-a473-54592ab46673" path="/var/lib/kubelet/pods/6fe56e22-16a2-4e2a-a473-54592ab46673/volumes" Feb 18 15:10:16 crc kubenswrapper[4957]: I0218 15:10:16.090221 4957 scope.go:117] "RemoveContainer" containerID="3b6fe0430dc9e317cf23e7cbfb0a007c329fb86eb3fa6832d75b081dcf57537d" Feb 18 15:10:16 crc kubenswrapper[4957]: I0218 15:10:16.126088 4957 scope.go:117] "RemoveContainer" containerID="0178af03f7af51d03a0a43381d9d1786c77f1ef247ff83c2d6c840956c7591e8" Feb 18 15:10:20 crc kubenswrapper[4957]: I0218 15:10:20.214475 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:10:20 crc kubenswrapper[4957]: E0218 15:10:20.215294 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:10:31 crc kubenswrapper[4957]: I0218 15:10:31.213990 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:10:31 crc kubenswrapper[4957]: E0218 15:10:31.214899 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:10:41 crc kubenswrapper[4957]: I0218 15:10:41.434882 4957 generic.go:334] "Generic (PLEG): container finished" podID="e041f9c9-d870-4b03-a9c1-98316547db7b" containerID="412d9e31eeb49c004293d8b7de196f6b0b39db99c70f552c9773225ee79fad44" exitCode=0 Feb 18 15:10:41 crc kubenswrapper[4957]: I0218 15:10:41.435350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" event={"ID":"e041f9c9-d870-4b03-a9c1-98316547db7b","Type":"ContainerDied","Data":"412d9e31eeb49c004293d8b7de196f6b0b39db99c70f552c9773225ee79fad44"} Feb 18 15:10:42 crc kubenswrapper[4957]: I0218 15:10:42.921582 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.071304 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ovn-combined-ca-bundle\") pod \"e041f9c9-d870-4b03-a9c1-98316547db7b\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.071863 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-inventory\") pod \"e041f9c9-d870-4b03-a9c1-98316547db7b\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.071904 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ssh-key-openstack-edpm-ipam\") pod \"e041f9c9-d870-4b03-a9c1-98316547db7b\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.071960 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb4vc\" (UniqueName: \"kubernetes.io/projected/e041f9c9-d870-4b03-a9c1-98316547db7b-kube-api-access-jb4vc\") pod \"e041f9c9-d870-4b03-a9c1-98316547db7b\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.072007 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e041f9c9-d870-4b03-a9c1-98316547db7b-ovncontroller-config-0\") pod \"e041f9c9-d870-4b03-a9c1-98316547db7b\" (UID: \"e041f9c9-d870-4b03-a9c1-98316547db7b\") " Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.081392 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e041f9c9-d870-4b03-a9c1-98316547db7b-kube-api-access-jb4vc" (OuterVolumeSpecName: "kube-api-access-jb4vc") pod "e041f9c9-d870-4b03-a9c1-98316547db7b" (UID: "e041f9c9-d870-4b03-a9c1-98316547db7b"). InnerVolumeSpecName "kube-api-access-jb4vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.081392 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e041f9c9-d870-4b03-a9c1-98316547db7b" (UID: "e041f9c9-d870-4b03-a9c1-98316547db7b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.109074 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e041f9c9-d870-4b03-a9c1-98316547db7b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e041f9c9-d870-4b03-a9c1-98316547db7b" (UID: "e041f9c9-d870-4b03-a9c1-98316547db7b"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.119402 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e041f9c9-d870-4b03-a9c1-98316547db7b" (UID: "e041f9c9-d870-4b03-a9c1-98316547db7b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.120027 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-inventory" (OuterVolumeSpecName: "inventory") pod "e041f9c9-d870-4b03-a9c1-98316547db7b" (UID: "e041f9c9-d870-4b03-a9c1-98316547db7b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.175238 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.175276 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.175288 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e041f9c9-d870-4b03-a9c1-98316547db7b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.175299 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb4vc\" (UniqueName: \"kubernetes.io/projected/e041f9c9-d870-4b03-a9c1-98316547db7b-kube-api-access-jb4vc\") on node \"crc\" DevicePath \"\"" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.175311 4957 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e041f9c9-d870-4b03-a9c1-98316547db7b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.457551 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" event={"ID":"e041f9c9-d870-4b03-a9c1-98316547db7b","Type":"ContainerDied","Data":"654dece0fee78620e24d27204309868b5ade3d940493a6581280b552c9f3fa31"} Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.457608 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="654dece0fee78620e24d27204309868b5ade3d940493a6581280b552c9f3fa31" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.457636 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-b4qdb" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.663555 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc"] Feb 18 15:10:43 crc kubenswrapper[4957]: E0218 15:10:43.666630 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e041f9c9-d870-4b03-a9c1-98316547db7b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.666669 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e041f9c9-d870-4b03-a9c1-98316547db7b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.666989 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e041f9c9-d870-4b03-a9c1-98316547db7b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.668098 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.671246 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.673169 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.673373 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.673573 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.673614 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.673608 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.684291 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc"] Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.790908 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.791062 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.791376 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.791560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.791707 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7p7\" (UniqueName: \"kubernetes.io/projected/afe6ed34-f3ab-456a-8628-d7128dcc602b-kube-api-access-ts7p7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.791838 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.894235 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.894286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.894349 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts7p7\" (UniqueName: \"kubernetes.io/projected/afe6ed34-f3ab-456a-8628-d7128dcc602b-kube-api-access-ts7p7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.894404 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.894601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.894649 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.900376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.900664 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.900869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.900921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.901243 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc 
kubenswrapper[4957]: I0218 15:10:43.913837 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts7p7\" (UniqueName: \"kubernetes.io/projected/afe6ed34-f3ab-456a-8628-d7128dcc602b-kube-api-access-ts7p7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:43 crc kubenswrapper[4957]: I0218 15:10:43.996002 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:10:44 crc kubenswrapper[4957]: I0218 15:10:44.551630 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc"] Feb 18 15:10:44 crc kubenswrapper[4957]: I0218 15:10:44.554911 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:10:45 crc kubenswrapper[4957]: I0218 15:10:45.213983 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:10:45 crc kubenswrapper[4957]: E0218 15:10:45.214920 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:10:45 crc kubenswrapper[4957]: I0218 15:10:45.482004 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" event={"ID":"afe6ed34-f3ab-456a-8628-d7128dcc602b","Type":"ContainerStarted","Data":"ee6bca349d9904ab793e26acdca45ec8ce67a6a5a6112700ce19dbff0a4435da"} Feb 18 15:10:46 crc kubenswrapper[4957]: I0218 15:10:46.496480 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" event={"ID":"afe6ed34-f3ab-456a-8628-d7128dcc602b","Type":"ContainerStarted","Data":"f4aea0c33087fdfa2019727359207b400cb2563de323667c9f0d347daf933338"} Feb 18 15:10:59 crc kubenswrapper[4957]: I0218 15:10:59.213475 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:10:59 crc kubenswrapper[4957]: E0218 15:10:59.214522 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:11:11 crc kubenswrapper[4957]: I0218 15:11:11.213990 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:11:11 crc kubenswrapper[4957]: E0218 15:11:11.214718 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:11:25 crc kubenswrapper[4957]: I0218 15:11:25.213790 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:11:25 crc kubenswrapper[4957]: E0218 15:11:25.214845 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:11:32 crc kubenswrapper[4957]: I0218 15:11:32.001159 4957 generic.go:334] "Generic (PLEG): container finished" podID="afe6ed34-f3ab-456a-8628-d7128dcc602b" containerID="f4aea0c33087fdfa2019727359207b400cb2563de323667c9f0d347daf933338" exitCode=0 Feb 18 15:11:32 crc kubenswrapper[4957]: I0218 15:11:32.001260 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" event={"ID":"afe6ed34-f3ab-456a-8628-d7128dcc602b","Type":"ContainerDied","Data":"f4aea0c33087fdfa2019727359207b400cb2563de323667c9f0d347daf933338"} Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.618178 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.729532 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts7p7\" (UniqueName: \"kubernetes.io/projected/afe6ed34-f3ab-456a-8628-d7128dcc602b-kube-api-access-ts7p7\") pod \"afe6ed34-f3ab-456a-8628-d7128dcc602b\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.729662 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"afe6ed34-f3ab-456a-8628-d7128dcc602b\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.729705 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-metadata-combined-ca-bundle\") pod \"afe6ed34-f3ab-456a-8628-d7128dcc602b\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.729744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-inventory\") pod \"afe6ed34-f3ab-456a-8628-d7128dcc602b\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.729824 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-ssh-key-openstack-edpm-ipam\") pod \"afe6ed34-f3ab-456a-8628-d7128dcc602b\" (UID: 
\"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.729968 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-nova-metadata-neutron-config-0\") pod \"afe6ed34-f3ab-456a-8628-d7128dcc602b\" (UID: \"afe6ed34-f3ab-456a-8628-d7128dcc602b\") " Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.735878 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "afe6ed34-f3ab-456a-8628-d7128dcc602b" (UID: "afe6ed34-f3ab-456a-8628-d7128dcc602b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.738202 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe6ed34-f3ab-456a-8628-d7128dcc602b-kube-api-access-ts7p7" (OuterVolumeSpecName: "kube-api-access-ts7p7") pod "afe6ed34-f3ab-456a-8628-d7128dcc602b" (UID: "afe6ed34-f3ab-456a-8628-d7128dcc602b"). InnerVolumeSpecName "kube-api-access-ts7p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.765677 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-inventory" (OuterVolumeSpecName: "inventory") pod "afe6ed34-f3ab-456a-8628-d7128dcc602b" (UID: "afe6ed34-f3ab-456a-8628-d7128dcc602b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.769560 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "afe6ed34-f3ab-456a-8628-d7128dcc602b" (UID: "afe6ed34-f3ab-456a-8628-d7128dcc602b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.774523 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "afe6ed34-f3ab-456a-8628-d7128dcc602b" (UID: "afe6ed34-f3ab-456a-8628-d7128dcc602b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.789090 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "afe6ed34-f3ab-456a-8628-d7128dcc602b" (UID: "afe6ed34-f3ab-456a-8628-d7128dcc602b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.833248 4957 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.833307 4957 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.833325 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.833337 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.833348 4957 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/afe6ed34-f3ab-456a-8628-d7128dcc602b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:11:33 crc kubenswrapper[4957]: I0218 15:11:33.833356 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts7p7\" (UniqueName: \"kubernetes.io/projected/afe6ed34-f3ab-456a-8628-d7128dcc602b-kube-api-access-ts7p7\") on node \"crc\" DevicePath \"\"" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.028887 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" event={"ID":"afe6ed34-f3ab-456a-8628-d7128dcc602b","Type":"ContainerDied","Data":"ee6bca349d9904ab793e26acdca45ec8ce67a6a5a6112700ce19dbff0a4435da"} Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.029267 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6bca349d9904ab793e26acdca45ec8ce67a6a5a6112700ce19dbff0a4435da" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.028971 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.152435 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t"] Feb 18 15:11:34 crc kubenswrapper[4957]: E0218 15:11:34.153002 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe6ed34-f3ab-456a-8628-d7128dcc602b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.153026 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe6ed34-f3ab-456a-8628-d7128dcc602b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.153377 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe6ed34-f3ab-456a-8628-d7128dcc602b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.154332 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.156952 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.158956 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.158998 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.159045 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.159776 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.173837 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t"] Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.245255 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.245337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmfgr\" (UniqueName: \"kubernetes.io/projected/ee4446c2-295a-4f11-b689-78721a39f23f-kube-api-access-vmfgr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.245475 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: 
\"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.245637 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.245713 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.347815 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.347914 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmfgr\" (UniqueName: \"kubernetes.io/projected/ee4446c2-295a-4f11-b689-78721a39f23f-kube-api-access-vmfgr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.348032 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.348140 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.348212 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.354265 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: 
\"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.354346 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.354362 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.358614 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.365241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmfgr\" (UniqueName: \"kubernetes.io/projected/ee4446c2-295a-4f11-b689-78721a39f23f-kube-api-access-vmfgr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-td67t\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:34 crc kubenswrapper[4957]: I0218 15:11:34.477741 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:11:35 crc kubenswrapper[4957]: I0218 15:11:35.017027 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t"] Feb 18 15:11:35 crc kubenswrapper[4957]: I0218 15:11:35.038219 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" event={"ID":"ee4446c2-295a-4f11-b689-78721a39f23f","Type":"ContainerStarted","Data":"baec9c798332cb2a861074c052177bab184f50d3f5de5f333b3585578c08f8c3"} Feb 18 15:11:36 crc kubenswrapper[4957]: I0218 15:11:36.049083 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" event={"ID":"ee4446c2-295a-4f11-b689-78721a39f23f","Type":"ContainerStarted","Data":"497b8ff425c9618953c8e13426dad7892e43bb4219e82f955b4ebb73d8d1b9b9"} Feb 18 15:11:36 crc kubenswrapper[4957]: I0218 15:11:36.076327 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" podStartSLOduration=1.6613085920000001 podStartE2EDuration="2.076243151s" podCreationTimestamp="2026-02-18 15:11:34 +0000 UTC" firstStartedPulling="2026-02-18 15:11:35.024065756 +0000 UTC m=+2401.544930500" lastFinishedPulling="2026-02-18 15:11:35.439000315 +0000 UTC m=+2401.959865059" observedRunningTime="2026-02-18 15:11:36.06785875 +0000 UTC m=+2402.588723494" watchObservedRunningTime="2026-02-18 15:11:36.076243151 +0000 UTC m=+2402.597107895" Feb 18 15:11:38 crc kubenswrapper[4957]: I0218 15:11:38.213872 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:11:38 crc kubenswrapper[4957]: E0218 15:11:38.214485 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:11:51 crc kubenswrapper[4957]: I0218 15:11:51.213877 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:11:51 crc kubenswrapper[4957]: E0218 15:11:51.214784 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:12:04 crc kubenswrapper[4957]: I0218 15:12:04.222053 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:12:04 crc kubenswrapper[4957]: E0218 15:12:04.223079 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:12:15 crc kubenswrapper[4957]: I0218 15:12:15.213486 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:12:15 crc kubenswrapper[4957]: E0218 15:12:15.214358 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:12:27 crc kubenswrapper[4957]: I0218 15:12:27.213544 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:12:27 crc kubenswrapper[4957]: E0218 15:12:27.214388 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:12:42 crc kubenswrapper[4957]: I0218 15:12:42.213452 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:12:42 crc kubenswrapper[4957]: E0218 15:12:42.214215 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:12:53 crc kubenswrapper[4957]: I0218 15:12:53.213370 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:12:53 crc kubenswrapper[4957]: E0218 15:12:53.214287 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:13:05 crc kubenswrapper[4957]: I0218 15:13:05.213738 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:13:05 crc kubenswrapper[4957]: E0218 15:13:05.214609 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:13:19 crc kubenswrapper[4957]: I0218 15:13:19.213715 4957 
scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:13:19 crc kubenswrapper[4957]: E0218 15:13:19.214465 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:13:32 crc kubenswrapper[4957]: I0218 15:13:32.213200 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:13:32 crc kubenswrapper[4957]: E0218 15:13:32.213990 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:13:43 crc kubenswrapper[4957]: I0218 15:13:43.214593 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:13:44 crc kubenswrapper[4957]: I0218 15:13:44.454696 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"a69e9f0e650842af65aadbc2c889b8e494d9005f988f1644caa07f5a3ac25640"} Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.156225 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c"] Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.158329 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.161746 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.162462 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.166809 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c"] Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.312200 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5ee832-b364-4a76-8d7d-6d3a576713a8-config-volume\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.312254 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxgc\" (UniqueName: \"kubernetes.io/projected/8e5ee832-b364-4a76-8d7d-6d3a576713a8-kube-api-access-gxxgc\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.312277 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e5ee832-b364-4a76-8d7d-6d3a576713a8-secret-volume\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.415114 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5ee832-b364-4a76-8d7d-6d3a576713a8-config-volume\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.415219 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxgc\" (UniqueName: \"kubernetes.io/projected/8e5ee832-b364-4a76-8d7d-6d3a576713a8-kube-api-access-gxxgc\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.415300 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e5ee832-b364-4a76-8d7d-6d3a576713a8-secret-volume\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.416033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5ee832-b364-4a76-8d7d-6d3a576713a8-config-volume\") pod 
\"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.420936 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e5ee832-b364-4a76-8d7d-6d3a576713a8-secret-volume\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.438292 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxgc\" (UniqueName: \"kubernetes.io/projected/8e5ee832-b364-4a76-8d7d-6d3a576713a8-kube-api-access-gxxgc\") pod \"collect-profiles-29523795-4bq6c\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.492061 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:00 crc kubenswrapper[4957]: I0218 15:15:00.958536 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c"] Feb 18 15:15:01 crc kubenswrapper[4957]: I0218 15:15:01.329644 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" event={"ID":"8e5ee832-b364-4a76-8d7d-6d3a576713a8","Type":"ContainerStarted","Data":"9f6ecfbee96b92c21f3209b6456a4a4549e8d37666441c74e0089e4580a4dae3"} Feb 18 15:15:01 crc kubenswrapper[4957]: I0218 15:15:01.329921 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" event={"ID":"8e5ee832-b364-4a76-8d7d-6d3a576713a8","Type":"ContainerStarted","Data":"66686b88abf7e8cc8cd1144dff0b632e620f46b98256e78c071809285e03d22d"} Feb 18 15:15:01 crc kubenswrapper[4957]: I0218 15:15:01.355024 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" podStartSLOduration=1.355005855 podStartE2EDuration="1.355005855s" podCreationTimestamp="2026-02-18 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:15:01.341770076 +0000 UTC m=+2607.862634820" watchObservedRunningTime="2026-02-18 15:15:01.355005855 +0000 UTC m=+2607.875870599" Feb 18 15:15:02 crc kubenswrapper[4957]: I0218 15:15:02.356277 4957 generic.go:334] "Generic (PLEG): container finished" podID="8e5ee832-b364-4a76-8d7d-6d3a576713a8" containerID="9f6ecfbee96b92c21f3209b6456a4a4549e8d37666441c74e0089e4580a4dae3" exitCode=0 Feb 18 15:15:02 crc kubenswrapper[4957]: I0218 15:15:02.356396 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" event={"ID":"8e5ee832-b364-4a76-8d7d-6d3a576713a8","Type":"ContainerDied","Data":"9f6ecfbee96b92c21f3209b6456a4a4549e8d37666441c74e0089e4580a4dae3"} Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.802915 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.904465 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxgc\" (UniqueName: \"kubernetes.io/projected/8e5ee832-b364-4a76-8d7d-6d3a576713a8-kube-api-access-gxxgc\") pod \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.904571 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5ee832-b364-4a76-8d7d-6d3a576713a8-config-volume\") pod \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.904778 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e5ee832-b364-4a76-8d7d-6d3a576713a8-secret-volume\") pod \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\" (UID: \"8e5ee832-b364-4a76-8d7d-6d3a576713a8\") " Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.905229 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e5ee832-b364-4a76-8d7d-6d3a576713a8-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e5ee832-b364-4a76-8d7d-6d3a576713a8" (UID: "8e5ee832-b364-4a76-8d7d-6d3a576713a8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.905858 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5ee832-b364-4a76-8d7d-6d3a576713a8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.918474 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5ee832-b364-4a76-8d7d-6d3a576713a8-kube-api-access-gxxgc" (OuterVolumeSpecName: "kube-api-access-gxxgc") pod "8e5ee832-b364-4a76-8d7d-6d3a576713a8" (UID: "8e5ee832-b364-4a76-8d7d-6d3a576713a8"). InnerVolumeSpecName "kube-api-access-gxxgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:15:03 crc kubenswrapper[4957]: I0218 15:15:03.919832 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5ee832-b364-4a76-8d7d-6d3a576713a8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e5ee832-b364-4a76-8d7d-6d3a576713a8" (UID: "8e5ee832-b364-4a76-8d7d-6d3a576713a8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.008121 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e5ee832-b364-4a76-8d7d-6d3a576713a8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.008453 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxgc\" (UniqueName: \"kubernetes.io/projected/8e5ee832-b364-4a76-8d7d-6d3a576713a8-kube-api-access-gxxgc\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.394064 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" event={"ID":"8e5ee832-b364-4a76-8d7d-6d3a576713a8","Type":"ContainerDied","Data":"66686b88abf7e8cc8cd1144dff0b632e620f46b98256e78c071809285e03d22d"} Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.394110 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c" Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.394115 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66686b88abf7e8cc8cd1144dff0b632e620f46b98256e78c071809285e03d22d" Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.427174 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"] Feb 18 15:15:04 crc kubenswrapper[4957]: I0218 15:15:04.437554 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-lq4hl"] Feb 18 15:15:06 crc kubenswrapper[4957]: I0218 15:15:06.227720 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cbe5d01-ad51-4b03-aba9-8757f5643bdb" path="/var/lib/kubelet/pods/4cbe5d01-ad51-4b03-aba9-8757f5643bdb/volumes" Feb 18 15:15:16 crc kubenswrapper[4957]: I0218 15:15:16.460352 4957 scope.go:117] "RemoveContainer" containerID="1af9bc629710bff03e0e53f21c128e9198f95e3d624450305c5de49ad7b1b75a" Feb 18 15:15:25 crc kubenswrapper[4957]: I0218 15:15:25.667627 4957 generic.go:334] "Generic (PLEG): container finished" podID="ee4446c2-295a-4f11-b689-78721a39f23f" containerID="497b8ff425c9618953c8e13426dad7892e43bb4219e82f955b4ebb73d8d1b9b9" exitCode=0 Feb 18 15:15:25 crc kubenswrapper[4957]: I0218 15:15:25.667703 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" event={"ID":"ee4446c2-295a-4f11-b689-78721a39f23f","Type":"ContainerDied","Data":"497b8ff425c9618953c8e13426dad7892e43bb4219e82f955b4ebb73d8d1b9b9"} Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.248644 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.401218 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-ssh-key-openstack-edpm-ipam\") pod \"ee4446c2-295a-4f11-b689-78721a39f23f\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.401365 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmfgr\" (UniqueName: \"kubernetes.io/projected/ee4446c2-295a-4f11-b689-78721a39f23f-kube-api-access-vmfgr\") pod \"ee4446c2-295a-4f11-b689-78721a39f23f\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.401643 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-inventory\") pod \"ee4446c2-295a-4f11-b689-78721a39f23f\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.401727 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-combined-ca-bundle\") pod \"ee4446c2-295a-4f11-b689-78721a39f23f\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.401786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-secret-0\") pod \"ee4446c2-295a-4f11-b689-78721a39f23f\" (UID: \"ee4446c2-295a-4f11-b689-78721a39f23f\") " Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.407077 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ee4446c2-295a-4f11-b689-78721a39f23f" (UID: "ee4446c2-295a-4f11-b689-78721a39f23f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.407248 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4446c2-295a-4f11-b689-78721a39f23f-kube-api-access-vmfgr" (OuterVolumeSpecName: "kube-api-access-vmfgr") pod "ee4446c2-295a-4f11-b689-78721a39f23f" (UID: "ee4446c2-295a-4f11-b689-78721a39f23f"). InnerVolumeSpecName "kube-api-access-vmfgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.432642 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ee4446c2-295a-4f11-b689-78721a39f23f" (UID: "ee4446c2-295a-4f11-b689-78721a39f23f"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.438824 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee4446c2-295a-4f11-b689-78721a39f23f" (UID: "ee4446c2-295a-4f11-b689-78721a39f23f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.457007 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-inventory" (OuterVolumeSpecName: "inventory") pod "ee4446c2-295a-4f11-b689-78721a39f23f" (UID: "ee4446c2-295a-4f11-b689-78721a39f23f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.505534 4957 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.505578 4957 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.505593 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.505605 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmfgr\" (UniqueName: \"kubernetes.io/projected/ee4446c2-295a-4f11-b689-78721a39f23f-kube-api-access-vmfgr\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.505619 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee4446c2-295a-4f11-b689-78721a39f23f-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.692709 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" event={"ID":"ee4446c2-295a-4f11-b689-78721a39f23f","Type":"ContainerDied","Data":"baec9c798332cb2a861074c052177bab184f50d3f5de5f333b3585578c08f8c3"} Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.692766 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baec9c798332cb2a861074c052177bab184f50d3f5de5f333b3585578c08f8c3" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.692835 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-td67t" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.786736 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw"] Feb 18 15:15:27 crc kubenswrapper[4957]: E0218 15:15:27.787406 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5ee832-b364-4a76-8d7d-6d3a576713a8" containerName="collect-profiles" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.787457 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5ee832-b364-4a76-8d7d-6d3a576713a8" containerName="collect-profiles" Feb 18 15:15:27 crc kubenswrapper[4957]: E0218 15:15:27.787513 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4446c2-295a-4f11-b689-78721a39f23f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.787525 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4446c2-295a-4f11-b689-78721a39f23f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.787929 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4446c2-295a-4f11-b689-78721a39f23f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.787975 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5ee832-b364-4a76-8d7d-6d3a576713a8" containerName="collect-profiles" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.789519 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.791868 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.792133 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.792652 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.792698 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.793157 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.792992 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.797140 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.813199 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw"] Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915394 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: 
\"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915455 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915533 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915567 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915589 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzsj8\" (UniqueName: \"kubernetes.io/projected/13f17c48-8243-416b-939e-7ba8a50f08d4-kube-api-access-lzsj8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915682 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915724 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915807 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915839 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915913 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:27 crc kubenswrapper[4957]: I0218 15:15:27.915998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018332 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018513 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018570 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzsj8\" (UniqueName: \"kubernetes.io/projected/13f17c48-8243-416b-939e-7ba8a50f08d4-kube-api-access-lzsj8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018647 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018723 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018860 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.018939 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.019412 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.024449 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.024509 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.024806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.027064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.027195 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.030858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.031380 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.038486 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.043332 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.045256 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzsj8\" (UniqueName: \"kubernetes.io/projected/13f17c48-8243-416b-939e-7ba8a50f08d4-kube-api-access-lzsj8\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-rdvhw\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.109049 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.651235 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw"] Feb 18 15:15:28 crc kubenswrapper[4957]: W0218 15:15:28.651745 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13f17c48_8243_416b_939e_7ba8a50f08d4.slice/crio-fbc5d114b682b4224471ef0a6ddff0eda6c570ddb70b38922b3bdefa6f80fa4e WatchSource:0}: Error finding container fbc5d114b682b4224471ef0a6ddff0eda6c570ddb70b38922b3bdefa6f80fa4e: Status 404 returned error can't find the container with id fbc5d114b682b4224471ef0a6ddff0eda6c570ddb70b38922b3bdefa6f80fa4e Feb 18 15:15:28 crc kubenswrapper[4957]: I0218 15:15:28.704082 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" event={"ID":"13f17c48-8243-416b-939e-7ba8a50f08d4","Type":"ContainerStarted","Data":"fbc5d114b682b4224471ef0a6ddff0eda6c570ddb70b38922b3bdefa6f80fa4e"} Feb 18 15:15:29 crc kubenswrapper[4957]: I0218 15:15:29.717082 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" event={"ID":"13f17c48-8243-416b-939e-7ba8a50f08d4","Type":"ContainerStarted","Data":"a1a1f8bb4397fe118b10f980ee81679a727733460663ae73c9b13e8055017cee"} Feb 18 15:15:29 crc kubenswrapper[4957]: I0218 15:15:29.741082 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" podStartSLOduration=2.181107515 podStartE2EDuration="2.741064915s" podCreationTimestamp="2026-02-18 15:15:27 +0000 UTC" firstStartedPulling="2026-02-18 15:15:28.65442897 +0000 UTC m=+2635.175293714" lastFinishedPulling="2026-02-18 15:15:29.21438637 +0000 UTC m=+2635.735251114" observedRunningTime="2026-02-18 15:15:29.738774569 +0000 UTC m=+2636.259639313" watchObservedRunningTime="2026-02-18 15:15:29.741064915 +0000 UTC m=+2636.261929659" Feb 18 15:16:07 crc kubenswrapper[4957]: I0218 15:16:07.278716 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:16:07 crc kubenswrapper[4957]: I0218 15:16:07.279413 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:16:37 crc kubenswrapper[4957]: I0218 15:16:37.278988 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:16:37 crc kubenswrapper[4957]: I0218 
15:16:37.279639 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:07 crc kubenswrapper[4957]: I0218 15:17:07.279486 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:17:07 crc kubenswrapper[4957]: I0218 15:17:07.279919 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:07 crc kubenswrapper[4957]: I0218 15:17:07.279959 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:17:07 crc kubenswrapper[4957]: I0218 15:17:07.280823 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a69e9f0e650842af65aadbc2c889b8e494d9005f988f1644caa07f5a3ac25640"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:17:07 crc kubenswrapper[4957]: I0218 15:17:07.280875 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://a69e9f0e650842af65aadbc2c889b8e494d9005f988f1644caa07f5a3ac25640" gracePeriod=600 Feb 18 15:17:08 crc kubenswrapper[4957]: I0218 15:17:08.121024 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="a69e9f0e650842af65aadbc2c889b8e494d9005f988f1644caa07f5a3ac25640" exitCode=0 Feb 18 15:17:08 crc kubenswrapper[4957]: I0218 15:17:08.121107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"a69e9f0e650842af65aadbc2c889b8e494d9005f988f1644caa07f5a3ac25640"} Feb 18 15:17:08 crc kubenswrapper[4957]: I0218 15:17:08.121631 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571"} Feb 18 15:17:08 crc kubenswrapper[4957]: I0218 15:17:08.121661 4957 scope.go:117] "RemoveContainer" containerID="44b5f20e8c741a1ccc4ddb62c02968c2612966a76e0271e585bc5a70580f1aba" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.082252 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dj57n"] Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.085547 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.097572 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dj57n"] Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.255299 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nznx5\" (UniqueName: \"kubernetes.io/projected/4e137151-421b-4394-8299-d1ce915c0ff4-kube-api-access-nznx5\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.255371 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-utilities\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.255433 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-catalog-content\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.358083 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-utilities\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.358179 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-catalog-content\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.358494 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nznx5\" (UniqueName: \"kubernetes.io/projected/4e137151-421b-4394-8299-d1ce915c0ff4-kube-api-access-nznx5\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.358646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-utilities\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.358728 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-catalog-content\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.376967 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nznx5\" (UniqueName: \"kubernetes.io/projected/4e137151-421b-4394-8299-d1ce915c0ff4-kube-api-access-nznx5\") pod \"redhat-operators-dj57n\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:16 crc kubenswrapper[4957]: I0218 15:17:16.415613 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:17 crc kubenswrapper[4957]: I0218 15:17:17.015825 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dj57n"] Feb 18 15:17:17 crc kubenswrapper[4957]: I0218 15:17:17.215720 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerStarted","Data":"5d8a42cda86778007b3effaad5235d716f540817cd9ce879cc7b5f2bb44cf17d"} Feb 18 15:17:18 crc kubenswrapper[4957]: I0218 15:17:18.228116 4957 generic.go:334] "Generic (PLEG): container finished" podID="4e137151-421b-4394-8299-d1ce915c0ff4" containerID="5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c" exitCode=0 Feb 18 15:17:18 crc kubenswrapper[4957]: I0218 15:17:18.229446 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerDied","Data":"5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c"} Feb 18 15:17:18 crc kubenswrapper[4957]: I0218 15:17:18.232109 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:17:19 crc kubenswrapper[4957]: I0218 15:17:19.245890 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerStarted","Data":"654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd"} Feb 18 15:17:24 crc kubenswrapper[4957]: I0218 15:17:24.299484 4957 generic.go:334] "Generic (PLEG): container finished" podID="4e137151-421b-4394-8299-d1ce915c0ff4" containerID="654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd" exitCode=0 Feb 18 15:17:24 crc kubenswrapper[4957]: I0218 15:17:24.299571 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerDied","Data":"654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd"} Feb 18 15:17:25 crc kubenswrapper[4957]: I0218 15:17:25.314394 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerStarted","Data":"3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a"} Feb 18 15:17:25 crc kubenswrapper[4957]: I0218 15:17:25.353106 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dj57n" podStartSLOduration=2.837955425 podStartE2EDuration="9.353083343s" podCreationTimestamp="2026-02-18 15:17:16 +0000 UTC" firstStartedPulling="2026-02-18 15:17:18.231889233 +0000 UTC m=+2744.752753977" lastFinishedPulling="2026-02-18 15:17:24.747017141 +0000 UTC m=+2751.267881895" observedRunningTime="2026-02-18 15:17:25.343874349 +0000 UTC m=+2751.864739113" watchObservedRunningTime="2026-02-18 15:17:25.353083343 +0000 UTC m=+2751.873948087" Feb 18 15:17:26 crc 
kubenswrapper[4957]: I0218 15:17:26.416708 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:26 crc kubenswrapper[4957]: I0218 15:17:26.417034 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:27 crc kubenswrapper[4957]: I0218 15:17:27.469446 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dj57n" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:27 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:27 crc kubenswrapper[4957]: > Feb 18 15:17:37 crc kubenswrapper[4957]: I0218 15:17:37.464168 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dj57n" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:37 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:37 crc kubenswrapper[4957]: > Feb 18 15:17:47 crc kubenswrapper[4957]: I0218 15:17:47.470535 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dj57n" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:47 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:47 crc kubenswrapper[4957]: > Feb 18 15:17:55 crc kubenswrapper[4957]: I0218 15:17:55.308191 4957 generic.go:334] "Generic (PLEG): container finished" podID="13f17c48-8243-416b-939e-7ba8a50f08d4" containerID="a1a1f8bb4397fe118b10f980ee81679a727733460663ae73c9b13e8055017cee" exitCode=0 Feb 18 15:17:55 crc kubenswrapper[4957]: I0218 15:17:55.308276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" event={"ID":"13f17c48-8243-416b-939e-7ba8a50f08d4","Type":"ContainerDied","Data":"a1a1f8bb4397fe118b10f980ee81679a727733460663ae73c9b13e8055017cee"} Feb 18 15:17:56 crc kubenswrapper[4957]: I0218 15:17:56.485065 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:56 crc kubenswrapper[4957]: I0218 15:17:56.555866 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:56 crc kubenswrapper[4957]: I0218 15:17:56.738633 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dj57n"] Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.001475 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.167441 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-2\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.168003 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-extra-config-0\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.168150 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-1\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.168285 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-inventory\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.168434 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-combined-ca-bundle\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.168588 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzsj8\" (UniqueName: \"kubernetes.io/projected/13f17c48-8243-416b-939e-7ba8a50f08d4-kube-api-access-lzsj8\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.168846 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-1\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.169042 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-0\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.169281 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-ssh-key-openstack-edpm-ipam\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.169446 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-3\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.169548 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-0\") pod \"13f17c48-8243-416b-939e-7ba8a50f08d4\" (UID: \"13f17c48-8243-416b-939e-7ba8a50f08d4\") " Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.183589 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.183845 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f17c48-8243-416b-939e-7ba8a50f08d4-kube-api-access-lzsj8" (OuterVolumeSpecName: "kube-api-access-lzsj8") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "kube-api-access-lzsj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.203497 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.204866 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.209015 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.219514 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.220501 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.229374 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.236392 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.237144 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.238116 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-inventory" (OuterVolumeSpecName: "inventory") pod "13f17c48-8243-416b-939e-7ba8a50f08d4" (UID: "13f17c48-8243-416b-939e-7ba8a50f08d4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274585 4957 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274619 4957 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274629 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274638 4957 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274647 4957 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274656 4957 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274666 4957 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274675 4957 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274683 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274691 4957 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f17c48-8243-416b-939e-7ba8a50f08d4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.274700 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzsj8\" (UniqueName: \"kubernetes.io/projected/13f17c48-8243-416b-939e-7ba8a50f08d4-kube-api-access-lzsj8\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.342609 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.342764 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdvhw" event={"ID":"13f17c48-8243-416b-939e-7ba8a50f08d4","Type":"ContainerDied","Data":"fbc5d114b682b4224471ef0a6ddff0eda6c570ddb70b38922b3bdefa6f80fa4e"} Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.343189 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc5d114b682b4224471ef0a6ddff0eda6c570ddb70b38922b3bdefa6f80fa4e" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.447754 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx"] Feb 18 15:17:57 crc kubenswrapper[4957]: E0218 15:17:57.453348 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f17c48-8243-416b-939e-7ba8a50f08d4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.453389 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f17c48-8243-416b-939e-7ba8a50f08d4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.455643 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f17c48-8243-416b-939e-7ba8a50f08d4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.463958 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.477552 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx"] Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.479007 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.479706 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.480128 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.480491 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.480639 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 15:17:57 crc kubenswrapper[4957]: E0218 15:17:57.500083 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13f17c48_8243_416b_939e_7ba8a50f08d4.slice\": RecentStats: unable to find data in memory cache]" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585396 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585500 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585528 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585603 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76nrg\" (UniqueName: \"kubernetes.io/projected/2735d3be-2856-4c38-8944-8e8698f1fc14-kube-api-access-76nrg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585677 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585817 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.585920 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688432 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688466 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688534 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76nrg\" (UniqueName: \"kubernetes.io/projected/2735d3be-2856-4c38-8944-8e8698f1fc14-kube-api-access-76nrg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688578 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688684 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.688755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.693957 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.694203 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.695047 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.695496 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.697175 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.699989 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.707941 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76nrg\" (UniqueName: \"kubernetes.io/projected/2735d3be-2856-4c38-8944-8e8698f1fc14-kube-api-access-76nrg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:57 crc kubenswrapper[4957]: I0218 15:17:57.798697 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:17:58 crc kubenswrapper[4957]: I0218 15:17:58.355800 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dj57n" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" containerID="cri-o://3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a" gracePeriod=2 Feb 18 15:17:58 crc kubenswrapper[4957]: I0218 15:17:58.430936 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx"] Feb 18 15:17:58 crc kubenswrapper[4957]: I0218 15:17:58.951715 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.035146 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nznx5\" (UniqueName: \"kubernetes.io/projected/4e137151-421b-4394-8299-d1ce915c0ff4-kube-api-access-nznx5\") pod \"4e137151-421b-4394-8299-d1ce915c0ff4\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.035230 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-utilities\") pod \"4e137151-421b-4394-8299-d1ce915c0ff4\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.035369 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-catalog-content\") pod \"4e137151-421b-4394-8299-d1ce915c0ff4\" (UID: \"4e137151-421b-4394-8299-d1ce915c0ff4\") " Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.039102 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-utilities" (OuterVolumeSpecName: "utilities") pod "4e137151-421b-4394-8299-d1ce915c0ff4" (UID: "4e137151-421b-4394-8299-d1ce915c0ff4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.041535 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e137151-421b-4394-8299-d1ce915c0ff4-kube-api-access-nznx5" (OuterVolumeSpecName: "kube-api-access-nznx5") pod "4e137151-421b-4394-8299-d1ce915c0ff4" (UID: "4e137151-421b-4394-8299-d1ce915c0ff4"). InnerVolumeSpecName "kube-api-access-nznx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.138009 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nznx5\" (UniqueName: \"kubernetes.io/projected/4e137151-421b-4394-8299-d1ce915c0ff4-kube-api-access-nznx5\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.138240 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.166688 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e137151-421b-4394-8299-d1ce915c0ff4" (UID: "4e137151-421b-4394-8299-d1ce915c0ff4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.243045 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e137151-421b-4394-8299-d1ce915c0ff4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.369843 4957 generic.go:334] "Generic (PLEG): container finished" podID="4e137151-421b-4394-8299-d1ce915c0ff4" containerID="3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a" exitCode=0 Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.369922 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerDied","Data":"3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a"} Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.369951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj57n" event={"ID":"4e137151-421b-4394-8299-d1ce915c0ff4","Type":"ContainerDied","Data":"5d8a42cda86778007b3effaad5235d716f540817cd9ce879cc7b5f2bb44cf17d"} Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.369968 4957 scope.go:117] "RemoveContainer" containerID="3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.370124 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dj57n" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.373894 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" event={"ID":"2735d3be-2856-4c38-8944-8e8698f1fc14","Type":"ContainerStarted","Data":"a139aa21f9ed80b9f3485068acd33e64e4374880c3796625fc892137e58649ee"} Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.373934 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" event={"ID":"2735d3be-2856-4c38-8944-8e8698f1fc14","Type":"ContainerStarted","Data":"a78c0bdd8da3846d60065ce4e73aa2a65bcdea3014a5e16e3dbfa92cb389f4c0"} Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.400925 4957 scope.go:117] "RemoveContainer" containerID="654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.419960 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" podStartSLOduration=1.9390851310000001 podStartE2EDuration="2.419932741s" podCreationTimestamp="2026-02-18 15:17:57 +0000 UTC" firstStartedPulling="2026-02-18 15:17:58.50634444 +0000 UTC m=+2785.027209184" lastFinishedPulling="2026-02-18 15:17:58.98719205 +0000 UTC m=+2785.508056794" observedRunningTime="2026-02-18 15:17:59.401997387 +0000 UTC m=+2785.922862131" watchObservedRunningTime="2026-02-18 15:17:59.419932741 +0000 UTC m=+2785.940797495" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.442199 4957 scope.go:117] "RemoveContainer" containerID="5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.451797 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dj57n"] Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.481105 4957 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-operators-dj57n"] Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.503877 4957 scope.go:117] "RemoveContainer" containerID="3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a" Feb 18 15:17:59 crc kubenswrapper[4957]: E0218 15:17:59.504374 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a\": container with ID starting with 3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a not found: ID does not exist" containerID="3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.504407 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a"} err="failed to get container status \"3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a\": rpc error: code = NotFound desc = could not find container \"3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a\": container with ID starting with 3c01e8f8aa02b2f112cc9a5b80ef8fc31146d6cc18c8cbc30dde9df540c61d7a not found: ID does not exist" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.504450 4957 scope.go:117] "RemoveContainer" containerID="654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd" Feb 18 15:17:59 crc kubenswrapper[4957]: E0218 15:17:59.504765 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd\": container with ID starting with 654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd not found: ID does not exist" containerID="654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.504808 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd"} err="failed to get container status \"654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd\": rpc error: code = NotFound desc = could not find container \"654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd\": container with ID starting with 654613c20c97a0ab190a4b00ca9821635417625b9fb6d4f5049c2094e1e471cd not found: ID does not exist" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.504834 4957 scope.go:117] "RemoveContainer" containerID="5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c" Feb 18 15:17:59 crc kubenswrapper[4957]: E0218 15:17:59.505181 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c\": container with ID starting with 5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c not found: ID does not exist" containerID="5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c" Feb 18 15:17:59 crc kubenswrapper[4957]: I0218 15:17:59.505227 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c"} err="failed to get container status \"5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c\": rpc error: code = NotFound desc 
= could not find container \"5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c\": container with ID starting with 5824f9bd8e852a7bdfaebefe27eeedd7bf951e849358f4f0b893ca1384bf2b6c not found: ID does not exist" Feb 18 15:18:00 crc kubenswrapper[4957]: I0218 15:18:00.229654 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" path="/var/lib/kubelet/pods/4e137151-421b-4394-8299-d1ce915c0ff4/volumes" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.780464 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8qq66"] Feb 18 15:18:32 crc kubenswrapper[4957]: E0218 15:18:32.781598 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="extract-content" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.781614 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="extract-content" Feb 18 15:18:32 crc kubenswrapper[4957]: E0218 15:18:32.781628 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="extract-utilities" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.781635 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="extract-utilities" Feb 18 15:18:32 crc kubenswrapper[4957]: E0218 15:18:32.781663 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.781670 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.781892 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e137151-421b-4394-8299-d1ce915c0ff4" containerName="registry-server" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.783576 4957 util.go:30] "No sandbox for pod can be found. 
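
The E-level "ContainerStatus from runtime service failed ... NotFound" records above are expected noise rather than failures: container removal is idempotent, and when the scope.go RemoveContainer path races with containers CRI-O has already deleted, NotFound is treated as success and cleanup converges. A stdlib-only sketch of that pattern, under the assumption that a sentinel error stands in for the gRPC NotFound status the real code inspects:

package main

import (
	"errors"
	"fmt"
)

// errNotFound is a hypothetical stand-in for the gRPC NotFound status the
// CRI runtime returns; the kubelet checks the status code instead.
var errNotFound = errors.New("container not found")

// removeFromRuntime is a stand-in for the CRI RemoveContainer call.
func removeFromRuntime(id string) error {
	return fmt.Errorf("remove %s: %w", id, errNotFound)
}

func removeContainer(id string) error {
	if err := removeFromRuntime(id); err != nil && !errors.Is(err, errNotFound) {
		return err // a real runtime failure
	}
	// NotFound counts as success: the container is already gone, so a
	// retried deletion converges instead of failing.
	return nil
}

func main() {
	fmt.Println(removeContainer("3c01e8f8aa02")) // prints <nil>
}
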
Need to start a new one" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.791019 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qq66"] Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.887516 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7bk\" (UniqueName: \"kubernetes.io/projected/f3e93532-4208-4a1a-a8a4-a991d54c36e2-kube-api-access-xw7bk\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.887721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-utilities\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.887946 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-catalog-content\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.990032 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-catalog-content\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.990197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7bk\" (UniqueName: \"kubernetes.io/projected/f3e93532-4208-4a1a-a8a4-a991d54c36e2-kube-api-access-xw7bk\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.990343 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-utilities\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.990747 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-catalog-content\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:32 crc kubenswrapper[4957]: I0218 15:18:32.990836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-utilities\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:33 crc kubenswrapper[4957]: I0218 15:18:33.010328 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xw7bk\" (UniqueName: \"kubernetes.io/projected/f3e93532-4208-4a1a-a8a4-a991d54c36e2-kube-api-access-xw7bk\") pod \"certified-operators-8qq66\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:33 crc kubenswrapper[4957]: I0218 15:18:33.106361 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:33 crc kubenswrapper[4957]: I0218 15:18:33.824327 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qq66"] Feb 18 15:18:34 crc kubenswrapper[4957]: I0218 15:18:34.796997 4957 generic.go:334] "Generic (PLEG): container finished" podID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerID="12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8" exitCode=0 Feb 18 15:18:34 crc kubenswrapper[4957]: I0218 15:18:34.797081 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerDied","Data":"12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8"} Feb 18 15:18:34 crc kubenswrapper[4957]: I0218 15:18:34.797371 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerStarted","Data":"07fe6c0b3de83794d92e9fddae2b67d059ddc116c895079d11d80678352772a9"} Feb 18 15:18:35 crc kubenswrapper[4957]: I0218 15:18:35.809661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerStarted","Data":"d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf"} Feb 18 15:18:37 crc kubenswrapper[4957]: I0218 15:18:37.829041 4957 generic.go:334] "Generic (PLEG): container finished" podID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerID="d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf" exitCode=0 Feb 18 15:18:37 crc kubenswrapper[4957]: I0218 15:18:37.829133 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerDied","Data":"d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf"} Feb 18 15:18:38 crc kubenswrapper[4957]: I0218 15:18:38.843781 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerStarted","Data":"ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c"} Feb 18 15:18:38 crc kubenswrapper[4957]: I0218 15:18:38.876281 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8qq66" podStartSLOduration=3.4209309709999998 podStartE2EDuration="6.876263297s" podCreationTimestamp="2026-02-18 15:18:32 +0000 UTC" firstStartedPulling="2026-02-18 15:18:34.79991054 +0000 UTC m=+2821.320775284" lastFinishedPulling="2026-02-18 15:18:38.255242866 +0000 UTC m=+2824.776107610" observedRunningTime="2026-02-18 15:18:38.86558754 +0000 UTC m=+2825.386452284" watchObservedRunningTime="2026-02-18 15:18:38.876263297 +0000 UTC m=+2825.397128041" Feb 18 15:18:43 crc kubenswrapper[4957]: I0218 15:18:43.107048 4957 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:43 crc kubenswrapper[4957]: I0218 15:18:43.107707 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:43 crc kubenswrapper[4957]: I0218 15:18:43.172361 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:43 crc kubenswrapper[4957]: I0218 15:18:43.942980 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:43 crc kubenswrapper[4957]: I0218 15:18:43.994724 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qq66"] Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.819320 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftbkh"] Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.822716 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.866903 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftbkh"] Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.915137 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8qq66" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="registry-server" containerID="cri-o://ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c" gracePeriod=2 Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.954640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-catalog-content\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.954674 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-utilities\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:45 crc kubenswrapper[4957]: I0218 15:18:45.954793 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjrd\" (UniqueName: \"kubernetes.io/projected/38cd6af1-b9b8-4065-81db-c5588a585b06-kube-api-access-5xjrd\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.057135 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-catalog-content\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.057193 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-utilities\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.057379 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjrd\" (UniqueName: \"kubernetes.io/projected/38cd6af1-b9b8-4065-81db-c5588a585b06-kube-api-access-5xjrd\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.057730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-catalog-content\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.058009 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-utilities\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.077803 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjrd\" (UniqueName: \"kubernetes.io/projected/38cd6af1-b9b8-4065-81db-c5588a585b06-kube-api-access-5xjrd\") pod \"community-operators-ftbkh\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.156913 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.573564 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.687231 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-utilities\") pod \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.687378 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw7bk\" (UniqueName: \"kubernetes.io/projected/f3e93532-4208-4a1a-a8a4-a991d54c36e2-kube-api-access-xw7bk\") pod \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.687542 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-catalog-content\") pod \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\" (UID: \"f3e93532-4208-4a1a-a8a4-a991d54c36e2\") " Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.693035 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e93532-4208-4a1a-a8a4-a991d54c36e2-kube-api-access-xw7bk" (OuterVolumeSpecName: "kube-api-access-xw7bk") pod "f3e93532-4208-4a1a-a8a4-a991d54c36e2" (UID: "f3e93532-4208-4a1a-a8a4-a991d54c36e2"). InnerVolumeSpecName "kube-api-access-xw7bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.702923 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-utilities" (OuterVolumeSpecName: "utilities") pod "f3e93532-4208-4a1a-a8a4-a991d54c36e2" (UID: "f3e93532-4208-4a1a-a8a4-a991d54c36e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.790978 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw7bk\" (UniqueName: \"kubernetes.io/projected/f3e93532-4208-4a1a-a8a4-a991d54c36e2-kube-api-access-xw7bk\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.791238 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.935753 4957 util.go:48] "No ready sandbox for pod can be found. 
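
Teardown, as the records just above show, runs the same reconciliation in reverse: once the pod leaves the desired state, each mounted volume gets "UnmountVolume started" and "UnmountVolume.TearDown succeeded", the "Volume detached ... DevicePath \"\"" record follows once the actual state drops it, and the orphaned /var/lib/kubelet/pods/<uid>/volumes directory is reclaimed last. A sketch mirroring the mount-side one earlier, again with illustrative stand-in types:

package main

import "fmt"

func main() {
	// Volumes still mounted for a pod that has left the desired state of
	// world (hypothetical stand-ins, not the kubelet's real types).
	actual := map[string]bool{
		"utilities":             true,
		"catalog-content":       true,
		"kube-api-access-xw7bk": true,
	}

	for vol := range actual {
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
		// ... unmount work happens here ...
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
		delete(actual, vol)
		fmt.Printf("Volume detached for volume %q on node %q\n", vol, "crc")
	}

	if len(actual) == 0 {
		// Only after every volume is gone can the pod dir be reclaimed.
		fmt.Println("Cleaned up orphaned pod volumes dir")
	}
}
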
Need to start a new one" pod="openshift-marketplace/certified-operators-8qq66" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.935677 4957 generic.go:334] "Generic (PLEG): container finished" podID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerID="ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c" exitCode=0 Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.936957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerDied","Data":"ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c"} Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.937123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qq66" event={"ID":"f3e93532-4208-4a1a-a8a4-a991d54c36e2","Type":"ContainerDied","Data":"07fe6c0b3de83794d92e9fddae2b67d059ddc116c895079d11d80678352772a9"} Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.937205 4957 scope.go:117] "RemoveContainer" containerID="ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.970016 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftbkh"] Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.970646 4957 scope.go:117] "RemoveContainer" containerID="d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf" Feb 18 15:18:46 crc kubenswrapper[4957]: I0218 15:18:46.994618 4957 scope.go:117] "RemoveContainer" containerID="12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.032359 4957 scope.go:117] "RemoveContainer" containerID="ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c" Feb 18 15:18:47 crc kubenswrapper[4957]: E0218 15:18:47.033140 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c\": container with ID starting with ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c not found: ID does not exist" containerID="ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.033230 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c"} err="failed to get container status \"ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c\": rpc error: code = NotFound desc = could not find container \"ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c\": container with ID starting with ff676dfd960a14fc14f3d56011ca3b8efa5d40c9a6c2fa92d62eb300e6f7278c not found: ID does not exist" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.033302 4957 scope.go:117] "RemoveContainer" containerID="d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf" Feb 18 15:18:47 crc kubenswrapper[4957]: E0218 15:18:47.033620 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf\": container with ID starting with d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf not found: ID does not exist" 
containerID="d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.033736 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf"} err="failed to get container status \"d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf\": rpc error: code = NotFound desc = could not find container \"d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf\": container with ID starting with d3145086be140aef0122ee889e8dd7ad1f0bd0089a91c43093391bfb06395dcf not found: ID does not exist" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.033827 4957 scope.go:117] "RemoveContainer" containerID="12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8" Feb 18 15:18:47 crc kubenswrapper[4957]: E0218 15:18:47.034325 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8\": container with ID starting with 12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8 not found: ID does not exist" containerID="12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.034352 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8"} err="failed to get container status \"12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8\": rpc error: code = NotFound desc = could not find container \"12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8\": container with ID starting with 12a25e103185474dd0aa957236dd4c1ff51aa082c363d076d9ecd5e80d1b25f8 not found: ID does not exist" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.201909 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3e93532-4208-4a1a-a8a4-a991d54c36e2" (UID: "f3e93532-4208-4a1a-a8a4-a991d54c36e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.277764 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qq66"] Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.288631 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8qq66"] Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.302193 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e93532-4208-4a1a-a8a4-a991d54c36e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.950471 4957 generic.go:334] "Generic (PLEG): container finished" podID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerID="43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972" exitCode=0 Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.951061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerDied","Data":"43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972"} Feb 18 15:18:47 crc kubenswrapper[4957]: I0218 15:18:47.951899 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerStarted","Data":"e5fbfa61a62cfaf7d52b9706f566643016d886e550b8dffc0a69964edceb62eb"} Feb 18 15:18:48 crc kubenswrapper[4957]: I0218 15:18:48.231487 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" path="/var/lib/kubelet/pods/f3e93532-4208-4a1a-a8a4-a991d54c36e2/volumes" Feb 18 15:18:48 crc kubenswrapper[4957]: I0218 15:18:48.971366 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerStarted","Data":"55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c"} Feb 18 15:18:50 crc kubenswrapper[4957]: I0218 15:18:50.993601 4957 generic.go:334] "Generic (PLEG): container finished" podID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerID="55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c" exitCode=0 Feb 18 15:18:50 crc kubenswrapper[4957]: I0218 15:18:50.993676 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerDied","Data":"55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c"} Feb 18 15:18:52 crc kubenswrapper[4957]: I0218 15:18:52.007086 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerStarted","Data":"d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0"} Feb 18 15:18:52 crc kubenswrapper[4957]: I0218 15:18:52.027609 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftbkh" podStartSLOduration=3.336197781 podStartE2EDuration="7.027593397s" podCreationTimestamp="2026-02-18 15:18:45 +0000 UTC" firstStartedPulling="2026-02-18 15:18:47.953074813 +0000 UTC m=+2834.473939557" lastFinishedPulling="2026-02-18 15:18:51.644470429 +0000 UTC m=+2838.165335173" observedRunningTime="2026-02-18 
15:18:52.026095554 +0000 UTC m=+2838.546960308" watchObservedRunningTime="2026-02-18 15:18:52.027593397 +0000 UTC m=+2838.548458141" Feb 18 15:18:56 crc kubenswrapper[4957]: I0218 15:18:56.157631 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:56 crc kubenswrapper[4957]: I0218 15:18:56.158268 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:56 crc kubenswrapper[4957]: I0218 15:18:56.229069 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:57 crc kubenswrapper[4957]: I0218 15:18:57.127114 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:57 crc kubenswrapper[4957]: I0218 15:18:57.183900 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftbkh"] Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.087631 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftbkh" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="registry-server" containerID="cri-o://d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0" gracePeriod=2 Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.703100 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.811285 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-utilities\") pod \"38cd6af1-b9b8-4065-81db-c5588a585b06\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.811434 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-catalog-content\") pod \"38cd6af1-b9b8-4065-81db-c5588a585b06\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.811569 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xjrd\" (UniqueName: \"kubernetes.io/projected/38cd6af1-b9b8-4065-81db-c5588a585b06-kube-api-access-5xjrd\") pod \"38cd6af1-b9b8-4065-81db-c5588a585b06\" (UID: \"38cd6af1-b9b8-4065-81db-c5588a585b06\") " Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.812116 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-utilities" (OuterVolumeSpecName: "utilities") pod "38cd6af1-b9b8-4065-81db-c5588a585b06" (UID: "38cd6af1-b9b8-4065-81db-c5588a585b06"). InnerVolumeSpecName "utilities". 
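
The pod_startup_latency_tracker records are internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), so the SLO figure deliberately excludes pull time. Reproducing the community-operators-ftbkh numbers from the timestamps logged above:

package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-18T15:18:45Z")
	firstPull := mustParse("2026-02-18T15:18:47.953074813Z")
	lastPull := mustParse("2026-02-18T15:18:51.644470429Z")
	running := mustParse("2026-02-18T15:18:52.026095554Z")

	e2e := running.Sub(created)          // 7.027593397s
	slo := e2e - lastPull.Sub(firstPull) // 3.336197781s, as logged
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}

The certified-operators-8qq66 line earlier checks out the same way: 6.876263297s minus a 3.455332326s pull window gives the logged 3.420930971s.
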
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.812960 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.818668 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38cd6af1-b9b8-4065-81db-c5588a585b06-kube-api-access-5xjrd" (OuterVolumeSpecName: "kube-api-access-5xjrd") pod "38cd6af1-b9b8-4065-81db-c5588a585b06" (UID: "38cd6af1-b9b8-4065-81db-c5588a585b06"). InnerVolumeSpecName "kube-api-access-5xjrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.860886 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38cd6af1-b9b8-4065-81db-c5588a585b06" (UID: "38cd6af1-b9b8-4065-81db-c5588a585b06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.915735 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xjrd\" (UniqueName: \"kubernetes.io/projected/38cd6af1-b9b8-4065-81db-c5588a585b06-kube-api-access-5xjrd\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:59 crc kubenswrapper[4957]: I0218 15:18:59.915965 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38cd6af1-b9b8-4065-81db-c5588a585b06-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.098207 4957 generic.go:334] "Generic (PLEG): container finished" podID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerID="d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0" exitCode=0 Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.098244 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftbkh" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.098275 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerDied","Data":"d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0"} Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.098705 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbkh" event={"ID":"38cd6af1-b9b8-4065-81db-c5588a585b06","Type":"ContainerDied","Data":"e5fbfa61a62cfaf7d52b9706f566643016d886e550b8dffc0a69964edceb62eb"} Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.098736 4957 scope.go:117] "RemoveContainer" containerID="d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.145513 4957 scope.go:117] "RemoveContainer" containerID="55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.149615 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftbkh"] Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.169093 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftbkh"] Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.183068 4957 scope.go:117] "RemoveContainer" containerID="43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.226736 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" path="/var/lib/kubelet/pods/38cd6af1-b9b8-4065-81db-c5588a585b06/volumes" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.258030 4957 scope.go:117] "RemoveContainer" containerID="d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0" Feb 18 15:19:00 crc kubenswrapper[4957]: E0218 15:19:00.258590 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0\": container with ID starting with d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0 not found: ID does not exist" containerID="d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.258625 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0"} err="failed to get container status \"d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0\": rpc error: code = NotFound desc = could not find container \"d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0\": container with ID starting with d07f66ddeda7b51e43229f3727d0131404deaa67f06323ce43993defb6e643c0 not found: ID does not exist" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.258649 4957 scope.go:117] "RemoveContainer" containerID="55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c" Feb 18 15:19:00 crc kubenswrapper[4957]: E0218 15:19:00.258993 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c\": container with ID 
starting with 55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c not found: ID does not exist" containerID="55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.259025 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c"} err="failed to get container status \"55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c\": rpc error: code = NotFound desc = could not find container \"55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c\": container with ID starting with 55ed4c7a4c718630536bd0abe299db331a29c97ec3d148828351d4d1523cc73c not found: ID does not exist" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.259042 4957 scope.go:117] "RemoveContainer" containerID="43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972" Feb 18 15:19:00 crc kubenswrapper[4957]: E0218 15:19:00.259292 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972\": container with ID starting with 43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972 not found: ID does not exist" containerID="43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972" Feb 18 15:19:00 crc kubenswrapper[4957]: I0218 15:19:00.259317 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972"} err="failed to get container status \"43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972\": rpc error: code = NotFound desc = could not find container \"43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972\": container with ID starting with 43611abe54777cf22eb4679aa16d87282ea0399d9ef93b09cec9aabaa8c68972 not found: ID does not exist" Feb 18 15:19:07 crc kubenswrapper[4957]: I0218 15:19:07.279618 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:19:07 crc kubenswrapper[4957]: I0218 15:19:07.280176 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:19:37 crc kubenswrapper[4957]: I0218 15:19:37.279538 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:19:37 crc kubenswrapper[4957]: I0218 15:19:37.280210 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:20:07 crc kubenswrapper[4957]: 
I0218 15:20:07.278768 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.279207 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.279256 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.280144 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.280212 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" gracePeriod=600 Feb 18 15:20:07 crc kubenswrapper[4957]: E0218 15:20:07.402136 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.916682 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" exitCode=0 Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.916889 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571"} Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.917052 4957 scope.go:117] "RemoveContainer" containerID="a69e9f0e650842af65aadbc2c889b8e494d9005f988f1644caa07f5a3ac25640" Feb 18 15:20:07 crc kubenswrapper[4957]: I0218 15:20:07.918035 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:20:07 crc kubenswrapper[4957]: E0218 15:20:07.918495 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:20:21 crc kubenswrapper[4957]: I0218 15:20:21.213091 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:20:21 crc kubenswrapper[4957]: E0218 15:20:21.213974 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:20:25 crc kubenswrapper[4957]: I0218 15:20:25.107486 4957 generic.go:334] "Generic (PLEG): container finished" podID="2735d3be-2856-4c38-8944-8e8698f1fc14" containerID="a139aa21f9ed80b9f3485068acd33e64e4374880c3796625fc892137e58649ee" exitCode=0 Feb 18 15:20:25 crc kubenswrapper[4957]: I0218 15:20:25.107569 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" event={"ID":"2735d3be-2856-4c38-8944-8e8698f1fc14","Type":"ContainerDied","Data":"a139aa21f9ed80b9f3485068acd33e64e4374880c3796625fc892137e58649ee"} Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.699366 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.805467 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ssh-key-openstack-edpm-ipam\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.805999 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-2\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.806063 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-0\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.806120 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-telemetry-combined-ca-bundle\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.806214 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-inventory\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: 
\"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.806310 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76nrg\" (UniqueName: \"kubernetes.io/projected/2735d3be-2856-4c38-8944-8e8698f1fc14-kube-api-access-76nrg\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.806362 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-1\") pod \"2735d3be-2856-4c38-8944-8e8698f1fc14\" (UID: \"2735d3be-2856-4c38-8944-8e8698f1fc14\") " Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.811863 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.821159 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2735d3be-2856-4c38-8944-8e8698f1fc14-kube-api-access-76nrg" (OuterVolumeSpecName: "kube-api-access-76nrg") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). InnerVolumeSpecName "kube-api-access-76nrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.838871 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.844704 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-inventory" (OuterVolumeSpecName: "inventory") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.846816 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.851782 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). 
InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.853412 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2735d3be-2856-4c38-8944-8e8698f1fc14" (UID: "2735d3be-2856-4c38-8944-8e8698f1fc14"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910103 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910148 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910161 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910174 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910187 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910202 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2735d3be-2856-4c38-8944-8e8698f1fc14-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:26 crc kubenswrapper[4957]: I0218 15:20:26.910213 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76nrg\" (UniqueName: \"kubernetes.io/projected/2735d3be-2856-4c38-8944-8e8698f1fc14-kube-api-access-76nrg\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.139445 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" event={"ID":"2735d3be-2856-4c38-8944-8e8698f1fc14","Type":"ContainerDied","Data":"a78c0bdd8da3846d60065ce4e73aa2a65bcdea3014a5e16e3dbfa92cb389f4c0"} Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.139494 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78c0bdd8da3846d60065ce4e73aa2a65bcdea3014a5e16e3dbfa92cb389f4c0" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.139561 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268084 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l"] Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268736 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2735d3be-2856-4c38-8944-8e8698f1fc14" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268764 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2735d3be-2856-4c38-8944-8e8698f1fc14" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268795 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="extract-content" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268804 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="extract-content" Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268819 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="registry-server" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268826 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="registry-server" Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268849 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="extract-utilities" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268857 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="extract-utilities" Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268884 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="extract-content" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268892 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="extract-content" Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268910 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="extract-utilities" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268918 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" containerName="extract-utilities" Feb 18 15:20:27 crc kubenswrapper[4957]: E0218 15:20:27.268937 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="registry-server" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.268944 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="registry-server" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.269208 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2735d3be-2856-4c38-8944-8e8698f1fc14" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.269227 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e93532-4208-4a1a-a8a4-a991d54c36e2" 
containerName="registry-server" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.269252 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="38cd6af1-b9b8-4065-81db-c5588a585b06" containerName="registry-server" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.270207 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.273640 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.273995 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.274276 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.274509 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.275480 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.281961 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l"] Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.318351 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkmlb\" (UniqueName: \"kubernetes.io/projected/0705aa1e-ac4d-4316-a4b1-9ad967170574-kube-api-access-xkmlb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.318462 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.318493 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.318527 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 
15:20:27.318631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.318654 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.318725 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421288 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421332 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421407 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421566 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkmlb\" (UniqueName: \"kubernetes.io/projected/0705aa1e-ac4d-4316-a4b1-9ad967170574-kube-api-access-xkmlb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421630 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.421675 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.424832 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.425047 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.425300 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.425733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.426485 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: 
\"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.429277 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.447549 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkmlb\" (UniqueName: \"kubernetes.io/projected/0705aa1e-ac4d-4316-a4b1-9ad967170574-kube-api-access-xkmlb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:27 crc kubenswrapper[4957]: I0218 15:20:27.596812 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:20:28 crc kubenswrapper[4957]: I0218 15:20:28.173323 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l"] Feb 18 15:20:29 crc kubenswrapper[4957]: I0218 15:20:29.161008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" event={"ID":"0705aa1e-ac4d-4316-a4b1-9ad967170574","Type":"ContainerStarted","Data":"d0ed37443e5c0091a5aa75d4ee36b5bbd26a6898492a1e05fb12c5156494cbbd"} Feb 18 15:20:29 crc kubenswrapper[4957]: I0218 15:20:29.161377 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" event={"ID":"0705aa1e-ac4d-4316-a4b1-9ad967170574","Type":"ContainerStarted","Data":"fc308ce741896de4a633a591a4e413ec35665b8efdf0063c07505f768e77736d"} Feb 18 15:20:29 crc kubenswrapper[4957]: I0218 15:20:29.193892 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" podStartSLOduration=1.499751438 podStartE2EDuration="2.1938511s" podCreationTimestamp="2026-02-18 15:20:27 +0000 UTC" firstStartedPulling="2026-02-18 15:20:28.181149841 +0000 UTC m=+2934.702014585" lastFinishedPulling="2026-02-18 15:20:28.875249483 +0000 UTC m=+2935.396114247" observedRunningTime="2026-02-18 15:20:29.176064654 +0000 UTC m=+2935.696929408" watchObservedRunningTime="2026-02-18 15:20:29.1938511 +0000 UTC m=+2935.714715854" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.310033 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6wbm"] Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.314838 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.327496 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6wbm"] Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.379030 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-catalog-content\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.379309 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-utilities\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.379379 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnx7\" (UniqueName: \"kubernetes.io/projected/043c8827-3f87-4d04-ac31-16c7693ac2be-kube-api-access-ksnx7\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.482383 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-catalog-content\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.482511 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-utilities\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.482541 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksnx7\" (UniqueName: \"kubernetes.io/projected/043c8827-3f87-4d04-ac31-16c7693ac2be-kube-api-access-ksnx7\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.483033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-catalog-content\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.483033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-utilities\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.502220 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ksnx7\" (UniqueName: \"kubernetes.io/projected/043c8827-3f87-4d04-ac31-16c7693ac2be-kube-api-access-ksnx7\") pod \"redhat-marketplace-w6wbm\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:33 crc kubenswrapper[4957]: I0218 15:20:33.651346 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:34 crc kubenswrapper[4957]: I0218 15:20:34.179269 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6wbm"] Feb 18 15:20:34 crc kubenswrapper[4957]: I0218 15:20:34.229187 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:20:34 crc kubenswrapper[4957]: I0218 15:20:34.229716 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerStarted","Data":"149995119d7c7e5e60d2223ce518e40027925cd390fec671e527f9bd2f0ee912"} Feb 18 15:20:34 crc kubenswrapper[4957]: E0218 15:20:34.230148 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:20:35 crc kubenswrapper[4957]: I0218 15:20:35.239067 4957 generic.go:334] "Generic (PLEG): container finished" podID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerID="df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8" exitCode=0 Feb 18 15:20:35 crc kubenswrapper[4957]: I0218 15:20:35.239255 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerDied","Data":"df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8"} Feb 18 15:20:37 crc kubenswrapper[4957]: I0218 15:20:37.263355 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerStarted","Data":"e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3"} Feb 18 15:20:38 crc kubenswrapper[4957]: I0218 15:20:38.274827 4957 generic.go:334] "Generic (PLEG): container finished" podID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerID="e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3" exitCode=0 Feb 18 15:20:38 crc kubenswrapper[4957]: I0218 15:20:38.274883 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerDied","Data":"e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3"} Feb 18 15:20:39 crc kubenswrapper[4957]: I0218 15:20:39.293967 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerStarted","Data":"6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585"} Feb 18 15:20:39 crc kubenswrapper[4957]: I0218 15:20:39.317817 4957 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6wbm" podStartSLOduration=2.8687907580000003 podStartE2EDuration="6.317801341s" podCreationTimestamp="2026-02-18 15:20:33 +0000 UTC" firstStartedPulling="2026-02-18 15:20:35.242797723 +0000 UTC m=+2941.763662507" lastFinishedPulling="2026-02-18 15:20:38.691808336 +0000 UTC m=+2945.212673090" observedRunningTime="2026-02-18 15:20:39.314387434 +0000 UTC m=+2945.835252188" watchObservedRunningTime="2026-02-18 15:20:39.317801341 +0000 UTC m=+2945.838666085" Feb 18 15:20:43 crc kubenswrapper[4957]: I0218 15:20:43.651717 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:43 crc kubenswrapper[4957]: I0218 15:20:43.652361 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:43 crc kubenswrapper[4957]: I0218 15:20:43.725378 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:44 crc kubenswrapper[4957]: I0218 15:20:44.404361 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:44 crc kubenswrapper[4957]: I0218 15:20:44.462449 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6wbm"] Feb 18 15:20:45 crc kubenswrapper[4957]: I0218 15:20:45.213566 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:20:45 crc kubenswrapper[4957]: E0218 15:20:45.214000 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:20:46 crc kubenswrapper[4957]: I0218 15:20:46.371759 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w6wbm" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="registry-server" containerID="cri-o://6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585" gracePeriod=2 Feb 18 15:20:46 crc kubenswrapper[4957]: I0218 15:20:46.973217 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.072027 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-catalog-content\") pod \"043c8827-3f87-4d04-ac31-16c7693ac2be\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.072272 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-utilities\") pod \"043c8827-3f87-4d04-ac31-16c7693ac2be\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.072362 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksnx7\" (UniqueName: \"kubernetes.io/projected/043c8827-3f87-4d04-ac31-16c7693ac2be-kube-api-access-ksnx7\") pod \"043c8827-3f87-4d04-ac31-16c7693ac2be\" (UID: \"043c8827-3f87-4d04-ac31-16c7693ac2be\") " Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.073360 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-utilities" (OuterVolumeSpecName: "utilities") pod "043c8827-3f87-4d04-ac31-16c7693ac2be" (UID: "043c8827-3f87-4d04-ac31-16c7693ac2be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.073819 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.083518 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043c8827-3f87-4d04-ac31-16c7693ac2be-kube-api-access-ksnx7" (OuterVolumeSpecName: "kube-api-access-ksnx7") pod "043c8827-3f87-4d04-ac31-16c7693ac2be" (UID: "043c8827-3f87-4d04-ac31-16c7693ac2be"). InnerVolumeSpecName "kube-api-access-ksnx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.177332 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksnx7\" (UniqueName: \"kubernetes.io/projected/043c8827-3f87-4d04-ac31-16c7693ac2be-kube-api-access-ksnx7\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.218198 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "043c8827-3f87-4d04-ac31-16c7693ac2be" (UID: "043c8827-3f87-4d04-ac31-16c7693ac2be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.279568 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043c8827-3f87-4d04-ac31-16c7693ac2be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.385679 4957 generic.go:334] "Generic (PLEG): container finished" podID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerID="6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585" exitCode=0 Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.385724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerDied","Data":"6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585"} Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.385756 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6wbm" event={"ID":"043c8827-3f87-4d04-ac31-16c7693ac2be","Type":"ContainerDied","Data":"149995119d7c7e5e60d2223ce518e40027925cd390fec671e527f9bd2f0ee912"} Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.385759 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6wbm" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.385775 4957 scope.go:117] "RemoveContainer" containerID="6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.408981 4957 scope.go:117] "RemoveContainer" containerID="e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.426725 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6wbm"] Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.438385 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6wbm"] Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.457660 4957 scope.go:117] "RemoveContainer" containerID="df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.504764 4957 scope.go:117] "RemoveContainer" containerID="6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585" Feb 18 15:20:47 crc kubenswrapper[4957]: E0218 15:20:47.505353 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585\": container with ID starting with 6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585 not found: ID does not exist" containerID="6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.505396 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585"} err="failed to get container status \"6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585\": rpc error: code = NotFound desc = could not find container \"6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585\": container with ID starting with 6c58af67f101e4b59e2eb367e1344055efb274878308becf4d9dc83a580fa585 not found: ID does not exist" Feb 18 15:20:47 
crc kubenswrapper[4957]: I0218 15:20:47.505445 4957 scope.go:117] "RemoveContainer" containerID="e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3" Feb 18 15:20:47 crc kubenswrapper[4957]: E0218 15:20:47.505941 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3\": container with ID starting with e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3 not found: ID does not exist" containerID="e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.505997 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3"} err="failed to get container status \"e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3\": rpc error: code = NotFound desc = could not find container \"e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3\": container with ID starting with e2ea5fca8b7c00866f809da2504622a358096ef0dfa7941e0dc3492cc1e259d3 not found: ID does not exist" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.506033 4957 scope.go:117] "RemoveContainer" containerID="df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8" Feb 18 15:20:47 crc kubenswrapper[4957]: E0218 15:20:47.506503 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8\": container with ID starting with df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8 not found: ID does not exist" containerID="df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8" Feb 18 15:20:47 crc kubenswrapper[4957]: I0218 15:20:47.506530 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8"} err="failed to get container status \"df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8\": rpc error: code = NotFound desc = could not find container \"df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8\": container with ID starting with df3800fc85a50363620e171e779567b2421a074c5165077d0de8247041aed4d8 not found: ID does not exist" Feb 18 15:20:48 crc kubenswrapper[4957]: I0218 15:20:48.234759 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" path="/var/lib/kubelet/pods/043c8827-3f87-4d04-ac31-16c7693ac2be/volumes" Feb 18 15:20:56 crc kubenswrapper[4957]: I0218 15:20:56.213309 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:20:56 crc kubenswrapper[4957]: E0218 15:20:56.214257 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:21:09 crc kubenswrapper[4957]: I0218 15:21:09.213500 4957 scope.go:117] "RemoveContainer" 
containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:21:09 crc kubenswrapper[4957]: E0218 15:21:09.214349 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:21:20 crc kubenswrapper[4957]: I0218 15:21:20.213882 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:21:20 crc kubenswrapper[4957]: E0218 15:21:20.214599 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:21:34 crc kubenswrapper[4957]: I0218 15:21:34.227055 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:21:34 crc kubenswrapper[4957]: E0218 15:21:34.227870 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:21:45 crc kubenswrapper[4957]: I0218 15:21:45.214190 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:21:45 crc kubenswrapper[4957]: E0218 15:21:45.215060 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:21:56 crc kubenswrapper[4957]: I0218 15:21:56.213825 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:21:56 crc kubenswrapper[4957]: E0218 15:21:56.214795 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:22:07 crc kubenswrapper[4957]: I0218 15:22:07.213543 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:22:07 crc kubenswrapper[4957]: E0218 15:22:07.214677 4957 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:22:22 crc kubenswrapper[4957]: I0218 15:22:22.213014 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:22:22 crc kubenswrapper[4957]: E0218 15:22:22.213934 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:22:25 crc kubenswrapper[4957]: I0218 15:22:25.477936 4957 generic.go:334] "Generic (PLEG): container finished" podID="0705aa1e-ac4d-4316-a4b1-9ad967170574" containerID="d0ed37443e5c0091a5aa75d4ee36b5bbd26a6898492a1e05fb12c5156494cbbd" exitCode=0 Feb 18 15:22:25 crc kubenswrapper[4957]: I0218 15:22:25.478014 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" event={"ID":"0705aa1e-ac4d-4316-a4b1-9ad967170574","Type":"ContainerDied","Data":"d0ed37443e5c0091a5aa75d4ee36b5bbd26a6898492a1e05fb12c5156494cbbd"} Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.013971 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.059859 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkmlb\" (UniqueName: \"kubernetes.io/projected/0705aa1e-ac4d-4316-a4b1-9ad967170574-kube-api-access-xkmlb\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.059947 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-1\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.060080 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-0\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.060116 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-2\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.060146 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-inventory\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.060178 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ssh-key-openstack-edpm-ipam\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.060204 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-telemetry-power-monitoring-combined-ca-bundle\") pod \"0705aa1e-ac4d-4316-a4b1-9ad967170574\" (UID: \"0705aa1e-ac4d-4316-a4b1-9ad967170574\") " Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.106693 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.120755 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0705aa1e-ac4d-4316-a4b1-9ad967170574-kube-api-access-xkmlb" (OuterVolumeSpecName: "kube-api-access-xkmlb") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "kube-api-access-xkmlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.174142 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.174181 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkmlb\" (UniqueName: \"kubernetes.io/projected/0705aa1e-ac4d-4316-a4b1-9ad967170574-kube-api-access-xkmlb\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.227528 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-inventory" (OuterVolumeSpecName: "inventory") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.233150 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.254996 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.256870 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.264389 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0705aa1e-ac4d-4316-a4b1-9ad967170574" (UID: "0705aa1e-ac4d-4316-a4b1-9ad967170574"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.278309 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.278353 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.278369 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.278382 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.278398 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0705aa1e-ac4d-4316-a4b1-9ad967170574-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.508295 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" event={"ID":"0705aa1e-ac4d-4316-a4b1-9ad967170574","Type":"ContainerDied","Data":"fc308ce741896de4a633a591a4e413ec35665b8efdf0063c07505f768e77736d"} Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.508338 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc308ce741896de4a633a591a4e413ec35665b8efdf0063c07505f768e77736d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.508389 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.626730 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d"] Feb 18 15:22:27 crc kubenswrapper[4957]: E0218 15:22:27.627847 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="extract-utilities" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.627868 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="extract-utilities" Feb 18 15:22:27 crc kubenswrapper[4957]: E0218 15:22:27.627891 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="extract-content" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.627901 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="extract-content" Feb 18 15:22:27 crc kubenswrapper[4957]: E0218 15:22:27.627935 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="registry-server" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.627942 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="registry-server" Feb 18 15:22:27 crc kubenswrapper[4957]: E0218 15:22:27.627952 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0705aa1e-ac4d-4316-a4b1-9ad967170574" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.627959 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0705aa1e-ac4d-4316-a4b1-9ad967170574" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.628226 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c8827-3f87-4d04-ac31-16c7693ac2be" containerName="registry-server" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.628258 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0705aa1e-ac4d-4316-a4b1-9ad967170574" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.629155 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.632968 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pmpvb" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.633102 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.634386 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.634490 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.634552 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.640265 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d"] Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.789179 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.790059 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.790322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.790695 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.790823 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f65j\" (UniqueName: \"kubernetes.io/projected/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-kube-api-access-4f65j\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 
15:22:27.893489 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.893676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.893919 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.894063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f65j\" (UniqueName: \"kubernetes.io/projected/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-kube-api-access-4f65j\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.894291 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.898554 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.899980 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.900475 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 
15:22:27.901754 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.924934 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f65j\" (UniqueName: \"kubernetes.io/projected/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-kube-api-access-4f65j\") pod \"logging-edpm-deployment-openstack-edpm-ipam-tjx2d\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:27 crc kubenswrapper[4957]: I0218 15:22:27.949971 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:28 crc kubenswrapper[4957]: I0218 15:22:28.592827 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d"] Feb 18 15:22:28 crc kubenswrapper[4957]: I0218 15:22:28.601632 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:22:29 crc kubenswrapper[4957]: I0218 15:22:29.532738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" event={"ID":"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8","Type":"ContainerStarted","Data":"e396b3dcce4f95a8bb80a6da05fe8d35823bc9180f87d154066c7e174979d7f0"} Feb 18 15:22:29 crc kubenswrapper[4957]: I0218 15:22:29.533253 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" event={"ID":"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8","Type":"ContainerStarted","Data":"355a5438d9512da4e394fb8fc2d5332ccff1e473a8756555a481965c66214bac"} Feb 18 15:22:29 crc kubenswrapper[4957]: I0218 15:22:29.560514 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" podStartSLOduration=2.056262212 podStartE2EDuration="2.560470181s" podCreationTimestamp="2026-02-18 15:22:27 +0000 UTC" firstStartedPulling="2026-02-18 15:22:28.601304554 +0000 UTC m=+3055.122169318" lastFinishedPulling="2026-02-18 15:22:29.105512533 +0000 UTC m=+3055.626377287" observedRunningTime="2026-02-18 15:22:29.554605454 +0000 UTC m=+3056.075470198" watchObservedRunningTime="2026-02-18 15:22:29.560470181 +0000 UTC m=+3056.081334935" Feb 18 15:22:34 crc kubenswrapper[4957]: I0218 15:22:34.223036 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:22:34 crc kubenswrapper[4957]: E0218 15:22:34.224024 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:22:44 crc kubenswrapper[4957]: I0218 15:22:44.738603 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" 
containerID="e396b3dcce4f95a8bb80a6da05fe8d35823bc9180f87d154066c7e174979d7f0" exitCode=0 Feb 18 15:22:44 crc kubenswrapper[4957]: I0218 15:22:44.738674 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" event={"ID":"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8","Type":"ContainerDied","Data":"e396b3dcce4f95a8bb80a6da05fe8d35823bc9180f87d154066c7e174979d7f0"} Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.315097 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.406768 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-ssh-key-openstack-edpm-ipam\") pod \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.406831 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-inventory\") pod \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.406889 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f65j\" (UniqueName: \"kubernetes.io/projected/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-kube-api-access-4f65j\") pod \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.406923 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-1\") pod \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.406942 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-0\") pod \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\" (UID: \"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8\") " Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.419741 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-kube-api-access-4f65j" (OuterVolumeSpecName: "kube-api-access-4f65j") pod "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" (UID: "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8"). InnerVolumeSpecName "kube-api-access-4f65j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.439229 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" (UID: "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.443334 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" (UID: "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.444629 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-inventory" (OuterVolumeSpecName: "inventory") pod "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" (UID: "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.446108 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" (UID: "3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.510107 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.510489 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.510510 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f65j\" (UniqueName: \"kubernetes.io/projected/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-kube-api-access-4f65j\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.510528 4957 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.510547 4957 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.765234 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" event={"ID":"3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8","Type":"ContainerDied","Data":"355a5438d9512da4e394fb8fc2d5332ccff1e473a8756555a481965c66214bac"} Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.765276 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="355a5438d9512da4e394fb8fc2d5332ccff1e473a8756555a481965c66214bac" Feb 18 15:22:46 crc kubenswrapper[4957]: I0218 15:22:46.765317 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-tjx2d" Feb 18 15:22:49 crc kubenswrapper[4957]: I0218 15:22:49.213808 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:22:49 crc kubenswrapper[4957]: E0218 15:22:49.216371 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:23:02 crc kubenswrapper[4957]: I0218 15:23:02.213166 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:23:02 crc kubenswrapper[4957]: E0218 15:23:02.214088 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:23:14 crc kubenswrapper[4957]: I0218 15:23:14.245281 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:23:14 crc kubenswrapper[4957]: E0218 15:23:14.246606 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:23:29 crc kubenswrapper[4957]: I0218 15:23:29.213205 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:23:29 crc kubenswrapper[4957]: E0218 15:23:29.213939 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:23:42 crc kubenswrapper[4957]: I0218 15:23:42.213272 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:23:42 crc kubenswrapper[4957]: E0218 15:23:42.214446 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:23:57 crc kubenswrapper[4957]: I0218 15:23:57.213396 4957 scope.go:117] "RemoveContainer" 
containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:23:57 crc kubenswrapper[4957]: E0218 15:23:57.214491 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:24:12 crc kubenswrapper[4957]: I0218 15:24:12.213723 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:24:12 crc kubenswrapper[4957]: E0218 15:24:12.214678 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:24:24 crc kubenswrapper[4957]: I0218 15:24:24.223251 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:24:24 crc kubenswrapper[4957]: E0218 15:24:24.224359 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:24:38 crc kubenswrapper[4957]: I0218 15:24:38.213546 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:24:38 crc kubenswrapper[4957]: E0218 15:24:38.214391 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:24:50 crc kubenswrapper[4957]: I0218 15:24:50.215241 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:24:50 crc kubenswrapper[4957]: E0218 15:24:50.216559 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:25:02 crc kubenswrapper[4957]: I0218 15:25:02.213225 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:25:02 crc kubenswrapper[4957]: E0218 15:25:02.214024 4957 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:25:13 crc kubenswrapper[4957]: I0218 15:25:13.214125 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:25:13 crc kubenswrapper[4957]: I0218 15:25:13.545234 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"694a9df68d5854b0201397a1e27128c5bd3ff2804ec6969915987278dd3e5bec"} Feb 18 15:27:37 crc kubenswrapper[4957]: I0218 15:27:37.279199 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:27:37 crc kubenswrapper[4957]: I0218 15:27:37.279758 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.332565 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvl7w"] Feb 18 15:27:41 crc kubenswrapper[4957]: E0218 15:27:41.333731 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.333751 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.334159 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.336133 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.351321 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvl7w"] Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.395435 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mdx\" (UniqueName: \"kubernetes.io/projected/5ff6b9dc-464f-484d-a941-f9bf98794417-kube-api-access-t6mdx\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.395593 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-catalog-content\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.395881 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-utilities\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.498508 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-utilities\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.498706 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mdx\" (UniqueName: \"kubernetes.io/projected/5ff6b9dc-464f-484d-a941-f9bf98794417-kube-api-access-t6mdx\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.498889 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-catalog-content\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.499011 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-utilities\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.499405 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-catalog-content\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.555030 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t6mdx\" (UniqueName: \"kubernetes.io/projected/5ff6b9dc-464f-484d-a941-f9bf98794417-kube-api-access-t6mdx\") pod \"redhat-operators-rvl7w\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:41 crc kubenswrapper[4957]: I0218 15:27:41.663106 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:27:42 crc kubenswrapper[4957]: I0218 15:27:42.293844 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvl7w"] Feb 18 15:27:43 crc kubenswrapper[4957]: I0218 15:27:43.242023 4957 generic.go:334] "Generic (PLEG): container finished" podID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerID="740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d" exitCode=0 Feb 18 15:27:43 crc kubenswrapper[4957]: I0218 15:27:43.242104 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerDied","Data":"740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d"} Feb 18 15:27:43 crc kubenswrapper[4957]: I0218 15:27:43.243558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerStarted","Data":"fd3e906810f6253f30f8490dfd81b2fd7c35bf4c7b86e96bca67d7043902f7c6"} Feb 18 15:27:43 crc kubenswrapper[4957]: I0218 15:27:43.245124 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:27:44 crc kubenswrapper[4957]: I0218 15:27:44.257508 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerStarted","Data":"e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b"} Feb 18 15:27:50 crc kubenswrapper[4957]: I0218 15:27:50.319923 4957 generic.go:334] "Generic (PLEG): container finished" podID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerID="e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b" exitCode=0 Feb 18 15:27:50 crc kubenswrapper[4957]: I0218 15:27:50.320013 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerDied","Data":"e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b"} Feb 18 15:27:52 crc kubenswrapper[4957]: I0218 15:27:52.343852 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerStarted","Data":"c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346"} Feb 18 15:27:52 crc kubenswrapper[4957]: I0218 15:27:52.371076 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvl7w" podStartSLOduration=3.087282475 podStartE2EDuration="11.371054977s" podCreationTimestamp="2026-02-18 15:27:41 +0000 UTC" firstStartedPulling="2026-02-18 15:27:43.244843941 +0000 UTC m=+3369.765708695" lastFinishedPulling="2026-02-18 15:27:51.528616453 +0000 UTC m=+3378.049481197" observedRunningTime="2026-02-18 15:27:52.359159779 +0000 UTC m=+3378.880024533" watchObservedRunningTime="2026-02-18 15:27:52.371054977 +0000 UTC m=+3378.891919721" Feb 18 15:28:01 crc 
kubenswrapper[4957]: I0218 15:28:01.663382 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:28:01 crc kubenswrapper[4957]: I0218 15:28:01.663886 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:28:02 crc kubenswrapper[4957]: I0218 15:28:02.719947 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvl7w" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="registry-server" probeResult="failure" output=< Feb 18 15:28:02 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:28:02 crc kubenswrapper[4957]: > Feb 18 15:28:07 crc kubenswrapper[4957]: I0218 15:28:07.279214 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:28:07 crc kubenswrapper[4957]: I0218 15:28:07.279796 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:28:12 crc kubenswrapper[4957]: I0218 15:28:12.711174 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvl7w" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="registry-server" probeResult="failure" output=< Feb 18 15:28:12 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:28:12 crc kubenswrapper[4957]: > Feb 18 15:28:21 crc kubenswrapper[4957]: I0218 15:28:21.721259 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:28:21 crc kubenswrapper[4957]: I0218 15:28:21.771983 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:28:21 crc kubenswrapper[4957]: I0218 15:28:21.966630 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvl7w"] Feb 18 15:28:23 crc kubenswrapper[4957]: I0218 15:28:23.677391 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvl7w" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="registry-server" containerID="cri-o://c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346" gracePeriod=2 Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.367887 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.482835 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-utilities\") pod \"5ff6b9dc-464f-484d-a941-f9bf98794417\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.482910 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-catalog-content\") pod \"5ff6b9dc-464f-484d-a941-f9bf98794417\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.483122 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6mdx\" (UniqueName: \"kubernetes.io/projected/5ff6b9dc-464f-484d-a941-f9bf98794417-kube-api-access-t6mdx\") pod \"5ff6b9dc-464f-484d-a941-f9bf98794417\" (UID: \"5ff6b9dc-464f-484d-a941-f9bf98794417\") " Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.483937 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-utilities" (OuterVolumeSpecName: "utilities") pod "5ff6b9dc-464f-484d-a941-f9bf98794417" (UID: "5ff6b9dc-464f-484d-a941-f9bf98794417"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.518298 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff6b9dc-464f-484d-a941-f9bf98794417-kube-api-access-t6mdx" (OuterVolumeSpecName: "kube-api-access-t6mdx") pod "5ff6b9dc-464f-484d-a941-f9bf98794417" (UID: "5ff6b9dc-464f-484d-a941-f9bf98794417"). InnerVolumeSpecName "kube-api-access-t6mdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.586980 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6mdx\" (UniqueName: \"kubernetes.io/projected/5ff6b9dc-464f-484d-a941-f9bf98794417-kube-api-access-t6mdx\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.587022 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.593074 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ff6b9dc-464f-484d-a941-f9bf98794417" (UID: "5ff6b9dc-464f-484d-a941-f9bf98794417"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.690655 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff6b9dc-464f-484d-a941-f9bf98794417-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.695430 4957 generic.go:334] "Generic (PLEG): container finished" podID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerID="c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346" exitCode=0 Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.695504 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerDied","Data":"c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346"} Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.695541 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvl7w" event={"ID":"5ff6b9dc-464f-484d-a941-f9bf98794417","Type":"ContainerDied","Data":"fd3e906810f6253f30f8490dfd81b2fd7c35bf4c7b86e96bca67d7043902f7c6"} Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.695566 4957 scope.go:117] "RemoveContainer" containerID="c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.695758 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvl7w" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.734944 4957 scope.go:117] "RemoveContainer" containerID="e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.745113 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvl7w"] Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.756907 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvl7w"] Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.766236 4957 scope.go:117] "RemoveContainer" containerID="740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.827158 4957 scope.go:117] "RemoveContainer" containerID="c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346" Feb 18 15:28:24 crc kubenswrapper[4957]: E0218 15:28:24.827680 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346\": container with ID starting with c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346 not found: ID does not exist" containerID="c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.827735 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346"} err="failed to get container status \"c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346\": rpc error: code = NotFound desc = could not find container \"c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346\": container with ID starting with c653c1acf5552b0a3820f58aa9de0c98b1b133eb92c672f3913f23179fc9b346 not found: ID does not exist" Feb 18 15:28:24 crc 
kubenswrapper[4957]: I0218 15:28:24.827770 4957 scope.go:117] "RemoveContainer" containerID="e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b" Feb 18 15:28:24 crc kubenswrapper[4957]: E0218 15:28:24.828150 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b\": container with ID starting with e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b not found: ID does not exist" containerID="e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.828187 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b"} err="failed to get container status \"e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b\": rpc error: code = NotFound desc = could not find container \"e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b\": container with ID starting with e1ef51e4c1f15275df7f752e96ec3a766ea3475f4468ea8fccca64d38389922b not found: ID does not exist" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.828211 4957 scope.go:117] "RemoveContainer" containerID="740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d" Feb 18 15:28:24 crc kubenswrapper[4957]: E0218 15:28:24.828649 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d\": container with ID starting with 740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d not found: ID does not exist" containerID="740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d" Feb 18 15:28:24 crc kubenswrapper[4957]: I0218 15:28:24.828681 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d"} err="failed to get container status \"740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d\": rpc error: code = NotFound desc = could not find container \"740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d\": container with ID starting with 740482c0834f467db55f5748f875aa20463fc8ee50f0da0b64630d2dea12c56d not found: ID does not exist" Feb 18 15:28:26 crc kubenswrapper[4957]: I0218 15:28:26.232190 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" path="/var/lib/kubelet/pods/5ff6b9dc-464f-484d-a941-f9bf98794417/volumes" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.246135 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v796d"] Feb 18 15:28:32 crc kubenswrapper[4957]: E0218 15:28:32.247250 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="extract-content" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.247265 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="extract-content" Feb 18 15:28:32 crc kubenswrapper[4957]: E0218 15:28:32.247299 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="registry-server" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.247305 4957 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="registry-server" Feb 18 15:28:32 crc kubenswrapper[4957]: E0218 15:28:32.247330 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="extract-utilities" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.247337 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="extract-utilities" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.247614 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff6b9dc-464f-484d-a941-f9bf98794417" containerName="registry-server" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.249491 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.259987 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v796d"] Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.346064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqgd\" (UniqueName: \"kubernetes.io/projected/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-kube-api-access-gwqgd\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.346495 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-catalog-content\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.346592 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-utilities\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.449344 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-catalog-content\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.449517 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-utilities\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.449701 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqgd\" (UniqueName: \"kubernetes.io/projected/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-kube-api-access-gwqgd\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc 
kubenswrapper[4957]: I0218 15:28:32.449929 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-catalog-content\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.450010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-utilities\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.477620 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqgd\" (UniqueName: \"kubernetes.io/projected/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-kube-api-access-gwqgd\") pod \"certified-operators-v796d\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:32 crc kubenswrapper[4957]: I0218 15:28:32.582247 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:33 crc kubenswrapper[4957]: I0218 15:28:33.143869 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v796d"] Feb 18 15:28:33 crc kubenswrapper[4957]: I0218 15:28:33.813240 4957 generic.go:334] "Generic (PLEG): container finished" podID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerID="549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4" exitCode=0 Feb 18 15:28:33 crc kubenswrapper[4957]: I0218 15:28:33.813324 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerDied","Data":"549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4"} Feb 18 15:28:33 crc kubenswrapper[4957]: I0218 15:28:33.813544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerStarted","Data":"7fe57444ae60f3a57b4a95cef0b6c6e0d67650727f6a12838f4b47ef2a6fc1f6"} Feb 18 15:28:34 crc kubenswrapper[4957]: I0218 15:28:34.829623 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerStarted","Data":"e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98"} Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.279775 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.280320 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.280374 4957 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.281223 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"694a9df68d5854b0201397a1e27128c5bd3ff2804ec6969915987278dd3e5bec"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.281295 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://694a9df68d5854b0201397a1e27128c5bd3ff2804ec6969915987278dd3e5bec" gracePeriod=600 Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.876080 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="694a9df68d5854b0201397a1e27128c5bd3ff2804ec6969915987278dd3e5bec" exitCode=0 Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.876143 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"694a9df68d5854b0201397a1e27128c5bd3ff2804ec6969915987278dd3e5bec"} Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.876452 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95"} Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.876493 4957 scope.go:117] "RemoveContainer" containerID="39b5b1257cbb2be9af0e5e8acb7db08c789d6b14136ed8d184c2c99f85230571" Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.882886 4957 generic.go:334] "Generic (PLEG): container finished" podID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerID="e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98" exitCode=0 Feb 18 15:28:37 crc kubenswrapper[4957]: I0218 15:28:37.882968 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerDied","Data":"e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98"} Feb 18 15:28:38 crc kubenswrapper[4957]: I0218 15:28:38.896088 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerStarted","Data":"acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d"} Feb 18 15:28:38 crc kubenswrapper[4957]: I0218 15:28:38.921129 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v796d" podStartSLOduration=2.27812341 podStartE2EDuration="6.921112332s" podCreationTimestamp="2026-02-18 15:28:32 +0000 UTC" firstStartedPulling="2026-02-18 15:28:33.814826385 +0000 UTC m=+3420.335691129" lastFinishedPulling="2026-02-18 15:28:38.457815267 +0000 UTC m=+3424.978680051" observedRunningTime="2026-02-18 15:28:38.917760376 +0000 UTC m=+3425.438625130" 
watchObservedRunningTime="2026-02-18 15:28:38.921112332 +0000 UTC m=+3425.441977076" Feb 18 15:28:42 crc kubenswrapper[4957]: I0218 15:28:42.582674 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:42 crc kubenswrapper[4957]: I0218 15:28:42.583439 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:43 crc kubenswrapper[4957]: I0218 15:28:43.643863 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v796d" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="registry-server" probeResult="failure" output=< Feb 18 15:28:43 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:28:43 crc kubenswrapper[4957]: > Feb 18 15:28:52 crc kubenswrapper[4957]: I0218 15:28:52.640302 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:52 crc kubenswrapper[4957]: I0218 15:28:52.693223 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:52 crc kubenswrapper[4957]: I0218 15:28:52.884335 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v796d"] Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.099181 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v796d" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="registry-server" containerID="cri-o://acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d" gracePeriod=2 Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.711628 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.843050 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwqgd\" (UniqueName: \"kubernetes.io/projected/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-kube-api-access-gwqgd\") pod \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.843159 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-catalog-content\") pod \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.843297 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-utilities\") pod \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\" (UID: \"af7bae28-2f56-4ea7-bf49-94e4696cb5a5\") " Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.844136 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-utilities" (OuterVolumeSpecName: "utilities") pod "af7bae28-2f56-4ea7-bf49-94e4696cb5a5" (UID: "af7bae28-2f56-4ea7-bf49-94e4696cb5a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.845037 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.852163 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-kube-api-access-gwqgd" (OuterVolumeSpecName: "kube-api-access-gwqgd") pod "af7bae28-2f56-4ea7-bf49-94e4696cb5a5" (UID: "af7bae28-2f56-4ea7-bf49-94e4696cb5a5"). InnerVolumeSpecName "kube-api-access-gwqgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.916482 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af7bae28-2f56-4ea7-bf49-94e4696cb5a5" (UID: "af7bae28-2f56-4ea7-bf49-94e4696cb5a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.949632 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwqgd\" (UniqueName: \"kubernetes.io/projected/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-kube-api-access-gwqgd\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4957]: I0218 15:28:54.949718 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af7bae28-2f56-4ea7-bf49-94e4696cb5a5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.110735 4957 generic.go:334] "Generic (PLEG): container finished" podID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerID="acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d" exitCode=0 Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.110784 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerDied","Data":"acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d"} Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.110820 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v796d" event={"ID":"af7bae28-2f56-4ea7-bf49-94e4696cb5a5","Type":"ContainerDied","Data":"7fe57444ae60f3a57b4a95cef0b6c6e0d67650727f6a12838f4b47ef2a6fc1f6"} Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.110817 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v796d" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.110865 4957 scope.go:117] "RemoveContainer" containerID="acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.136009 4957 scope.go:117] "RemoveContainer" containerID="e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.151593 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v796d"] Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.164080 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v796d"] Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.171472 4957 scope.go:117] "RemoveContainer" containerID="549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.217608 4957 scope.go:117] "RemoveContainer" containerID="acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d" Feb 18 15:28:55 crc kubenswrapper[4957]: E0218 15:28:55.218055 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d\": container with ID starting with acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d not found: ID does not exist" containerID="acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.218107 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d"} err="failed to get container status \"acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d\": rpc error: code = NotFound desc = could not find container \"acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d\": container with ID starting with acc2454c03c63a5367a71d7ca5d9570663614639655150f8a6143cea681d629d not found: ID does not exist" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.218138 4957 scope.go:117] "RemoveContainer" containerID="e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98" Feb 18 15:28:55 crc kubenswrapper[4957]: E0218 15:28:55.218485 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98\": container with ID starting with e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98 not found: ID does not exist" containerID="e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.218519 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98"} err="failed to get container status \"e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98\": rpc error: code = NotFound desc = could not find container \"e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98\": container with ID starting with e4b6751a9e08d75ff286b18749a3758b58a7646a1fc7c17f0588ec87f8bf7c98 not found: ID does not exist" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.218539 4957 scope.go:117] "RemoveContainer" 
containerID="549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4" Feb 18 15:28:55 crc kubenswrapper[4957]: E0218 15:28:55.218801 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4\": container with ID starting with 549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4 not found: ID does not exist" containerID="549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4" Feb 18 15:28:55 crc kubenswrapper[4957]: I0218 15:28:55.218826 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4"} err="failed to get container status \"549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4\": rpc error: code = NotFound desc = could not find container \"549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4\": container with ID starting with 549940e5aa0c0135673286dba1848d01c6b22cab994639573ea93326e8b715d4 not found: ID does not exist" Feb 18 15:28:56 crc kubenswrapper[4957]: I0218 15:28:56.226817 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" path="/var/lib/kubelet/pods/af7bae28-2f56-4ea7-bf49-94e4696cb5a5/volumes" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.171265 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x"] Feb 18 15:30:00 crc kubenswrapper[4957]: E0218 15:30:00.172472 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="extract-content" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.172489 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="extract-content" Feb 18 15:30:00 crc kubenswrapper[4957]: E0218 15:30:00.172516 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="extract-utilities" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.172522 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="extract-utilities" Feb 18 15:30:00 crc kubenswrapper[4957]: E0218 15:30:00.172548 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.172556 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.172795 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7bae28-2f56-4ea7-bf49-94e4696cb5a5" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.173699 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.177380 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.178713 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.210936 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x"] Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.305855 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc1c9b2-8347-46c0-9b85-b75ec888134f-secret-volume\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.306057 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc1c9b2-8347-46c0-9b85-b75ec888134f-config-volume\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.306346 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5lrd\" (UniqueName: \"kubernetes.io/projected/2dc1c9b2-8347-46c0-9b85-b75ec888134f-kube-api-access-x5lrd\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.408868 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc1c9b2-8347-46c0-9b85-b75ec888134f-secret-volume\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.409115 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc1c9b2-8347-46c0-9b85-b75ec888134f-config-volume\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.409263 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5lrd\" (UniqueName: \"kubernetes.io/projected/2dc1c9b2-8347-46c0-9b85-b75ec888134f-kube-api-access-x5lrd\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.411571 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc1c9b2-8347-46c0-9b85-b75ec888134f-config-volume\") pod 
\"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.414378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc1c9b2-8347-46c0-9b85-b75ec888134f-secret-volume\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.427066 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5lrd\" (UniqueName: \"kubernetes.io/projected/2dc1c9b2-8347-46c0-9b85-b75ec888134f-kube-api-access-x5lrd\") pod \"collect-profiles-29523810-hsk9x\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.506256 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:00 crc kubenswrapper[4957]: I0218 15:30:00.976798 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x"] Feb 18 15:30:01 crc kubenswrapper[4957]: I0218 15:30:01.911576 4957 generic.go:334] "Generic (PLEG): container finished" podID="2dc1c9b2-8347-46c0-9b85-b75ec888134f" containerID="7fb9abd76dc44d745f9a966cf5fbce15848a58162c23cbffd6ccdf6c6d455628" exitCode=0 Feb 18 15:30:01 crc kubenswrapper[4957]: I0218 15:30:01.912032 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" event={"ID":"2dc1c9b2-8347-46c0-9b85-b75ec888134f","Type":"ContainerDied","Data":"7fb9abd76dc44d745f9a966cf5fbce15848a58162c23cbffd6ccdf6c6d455628"} Feb 18 15:30:01 crc kubenswrapper[4957]: I0218 15:30:01.912123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" event={"ID":"2dc1c9b2-8347-46c0-9b85-b75ec888134f","Type":"ContainerStarted","Data":"348cfd05afc06d528ffa9f224f1d16d8f189bc2c3c8880e717ee58aeeb7dd033"} Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.373767 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.503652 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc1c9b2-8347-46c0-9b85-b75ec888134f-config-volume\") pod \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.504092 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc1c9b2-8347-46c0-9b85-b75ec888134f-secret-volume\") pod \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.504193 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5lrd\" (UniqueName: \"kubernetes.io/projected/2dc1c9b2-8347-46c0-9b85-b75ec888134f-kube-api-access-x5lrd\") pod \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\" (UID: \"2dc1c9b2-8347-46c0-9b85-b75ec888134f\") " Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.505039 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc1c9b2-8347-46c0-9b85-b75ec888134f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2dc1c9b2-8347-46c0-9b85-b75ec888134f" (UID: "2dc1c9b2-8347-46c0-9b85-b75ec888134f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.512321 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc1c9b2-8347-46c0-9b85-b75ec888134f-kube-api-access-x5lrd" (OuterVolumeSpecName: "kube-api-access-x5lrd") pod "2dc1c9b2-8347-46c0-9b85-b75ec888134f" (UID: "2dc1c9b2-8347-46c0-9b85-b75ec888134f"). InnerVolumeSpecName "kube-api-access-x5lrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.517020 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc1c9b2-8347-46c0-9b85-b75ec888134f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2dc1c9b2-8347-46c0-9b85-b75ec888134f" (UID: "2dc1c9b2-8347-46c0-9b85-b75ec888134f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.607653 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2dc1c9b2-8347-46c0-9b85-b75ec888134f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.607694 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5lrd\" (UniqueName: \"kubernetes.io/projected/2dc1c9b2-8347-46c0-9b85-b75ec888134f-kube-api-access-x5lrd\") on node \"crc\" DevicePath \"\"" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.607705 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc1c9b2-8347-46c0-9b85-b75ec888134f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.934960 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" event={"ID":"2dc1c9b2-8347-46c0-9b85-b75ec888134f","Type":"ContainerDied","Data":"348cfd05afc06d528ffa9f224f1d16d8f189bc2c3c8880e717ee58aeeb7dd033"} Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.935004 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348cfd05afc06d528ffa9f224f1d16d8f189bc2c3c8880e717ee58aeeb7dd033" Feb 18 15:30:03 crc kubenswrapper[4957]: I0218 15:30:03.935285 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-hsk9x" Feb 18 15:30:04 crc kubenswrapper[4957]: I0218 15:30:04.472153 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn"] Feb 18 15:30:04 crc kubenswrapper[4957]: I0218 15:30:04.485643 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-nsgpn"] Feb 18 15:30:06 crc kubenswrapper[4957]: I0218 15:30:06.231584 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e53af5-e229-449c-a9e9-422344aaecef" path="/var/lib/kubelet/pods/83e53af5-e229-449c-a9e9-422344aaecef/volumes" Feb 18 15:30:17 crc kubenswrapper[4957]: I0218 15:30:17.007712 4957 scope.go:117] "RemoveContainer" containerID="4af4a642c75ffc855a545a026043ec21b9800efe61fc37d90afcc336ce460db8" Feb 18 15:30:37 crc kubenswrapper[4957]: I0218 15:30:37.278777 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:30:37 crc kubenswrapper[4957]: I0218 15:30:37.279323 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.615770 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7gqc2"] Feb 18 15:30:56 crc kubenswrapper[4957]: E0218 15:30:56.617153 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc1c9b2-8347-46c0-9b85-b75ec888134f" 
containerName="collect-profiles" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.617196 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc1c9b2-8347-46c0-9b85-b75ec888134f" containerName="collect-profiles" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.617640 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc1c9b2-8347-46c0-9b85-b75ec888134f" containerName="collect-profiles" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.625322 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.640981 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gqc2"] Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.786631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hhg\" (UniqueName: \"kubernetes.io/projected/6cf52cfb-7ad9-451c-946b-20b9a9451084-kube-api-access-v8hhg\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.786742 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-catalog-content\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.786769 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-utilities\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.889993 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hhg\" (UniqueName: \"kubernetes.io/projected/6cf52cfb-7ad9-451c-946b-20b9a9451084-kube-api-access-v8hhg\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.890168 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-catalog-content\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.890191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-utilities\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.890842 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-catalog-content\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " 
pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.890891 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-utilities\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.915344 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hhg\" (UniqueName: \"kubernetes.io/projected/6cf52cfb-7ad9-451c-946b-20b9a9451084-kube-api-access-v8hhg\") pod \"redhat-marketplace-7gqc2\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:56 crc kubenswrapper[4957]: I0218 15:30:56.946075 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:30:57 crc kubenswrapper[4957]: I0218 15:30:57.515990 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gqc2"] Feb 18 15:30:57 crc kubenswrapper[4957]: I0218 15:30:57.579045 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerStarted","Data":"1d5aea56340479e266fe22dcff076f7a4d1d2ff0cd26cb1180db3aee4971cf27"} Feb 18 15:30:58 crc kubenswrapper[4957]: I0218 15:30:58.591029 4957 generic.go:334] "Generic (PLEG): container finished" podID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerID="b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29" exitCode=0 Feb 18 15:30:58 crc kubenswrapper[4957]: I0218 15:30:58.591122 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerDied","Data":"b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29"} Feb 18 15:31:00 crc kubenswrapper[4957]: I0218 15:31:00.633998 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerStarted","Data":"4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8"} Feb 18 15:31:02 crc kubenswrapper[4957]: I0218 15:31:02.657851 4957 generic.go:334] "Generic (PLEG): container finished" podID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerID="4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8" exitCode=0 Feb 18 15:31:02 crc kubenswrapper[4957]: I0218 15:31:02.657927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerDied","Data":"4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8"} Feb 18 15:31:03 crc kubenswrapper[4957]: I0218 15:31:03.672351 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerStarted","Data":"34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228"} Feb 18 15:31:03 crc kubenswrapper[4957]: I0218 15:31:03.696922 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7gqc2" podStartSLOduration=3.212408465 
podStartE2EDuration="7.696904396s" podCreationTimestamp="2026-02-18 15:30:56 +0000 UTC" firstStartedPulling="2026-02-18 15:30:58.593708505 +0000 UTC m=+3565.114573289" lastFinishedPulling="2026-02-18 15:31:03.078204476 +0000 UTC m=+3569.599069220" observedRunningTime="2026-02-18 15:31:03.691946114 +0000 UTC m=+3570.212810878" watchObservedRunningTime="2026-02-18 15:31:03.696904396 +0000 UTC m=+3570.217769150" Feb 18 15:31:06 crc kubenswrapper[4957]: I0218 15:31:06.948591 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:31:06 crc kubenswrapper[4957]: I0218 15:31:06.949338 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:31:07 crc kubenswrapper[4957]: I0218 15:31:07.028281 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:31:07 crc kubenswrapper[4957]: I0218 15:31:07.279801 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:31:07 crc kubenswrapper[4957]: I0218 15:31:07.280207 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:31:17 crc kubenswrapper[4957]: I0218 15:31:17.000223 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:31:17 crc kubenswrapper[4957]: I0218 15:31:17.063565 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gqc2"] Feb 18 15:31:17 crc kubenswrapper[4957]: I0218 15:31:17.822655 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7gqc2" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="registry-server" containerID="cri-o://34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228" gracePeriod=2 Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.381123 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.497662 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-catalog-content\") pod \"6cf52cfb-7ad9-451c-946b-20b9a9451084\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.497791 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-utilities\") pod \"6cf52cfb-7ad9-451c-946b-20b9a9451084\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.497945 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8hhg\" (UniqueName: \"kubernetes.io/projected/6cf52cfb-7ad9-451c-946b-20b9a9451084-kube-api-access-v8hhg\") pod \"6cf52cfb-7ad9-451c-946b-20b9a9451084\" (UID: \"6cf52cfb-7ad9-451c-946b-20b9a9451084\") " Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.498987 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-utilities" (OuterVolumeSpecName: "utilities") pod "6cf52cfb-7ad9-451c-946b-20b9a9451084" (UID: "6cf52cfb-7ad9-451c-946b-20b9a9451084"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.511793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf52cfb-7ad9-451c-946b-20b9a9451084-kube-api-access-v8hhg" (OuterVolumeSpecName: "kube-api-access-v8hhg") pod "6cf52cfb-7ad9-451c-946b-20b9a9451084" (UID: "6cf52cfb-7ad9-451c-946b-20b9a9451084"). InnerVolumeSpecName "kube-api-access-v8hhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.531405 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cf52cfb-7ad9-451c-946b-20b9a9451084" (UID: "6cf52cfb-7ad9-451c-946b-20b9a9451084"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.601578 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.601616 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8hhg\" (UniqueName: \"kubernetes.io/projected/6cf52cfb-7ad9-451c-946b-20b9a9451084-kube-api-access-v8hhg\") on node \"crc\" DevicePath \"\"" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.601636 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf52cfb-7ad9-451c-946b-20b9a9451084-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.835805 4957 generic.go:334] "Generic (PLEG): container finished" podID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerID="34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228" exitCode=0 Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.835860 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerDied","Data":"34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228"} Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.835894 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gqc2" event={"ID":"6cf52cfb-7ad9-451c-946b-20b9a9451084","Type":"ContainerDied","Data":"1d5aea56340479e266fe22dcff076f7a4d1d2ff0cd26cb1180db3aee4971cf27"} Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.835918 4957 scope.go:117] "RemoveContainer" containerID="34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.836095 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gqc2" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.870982 4957 scope.go:117] "RemoveContainer" containerID="4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8" Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.897379 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gqc2"] Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.915131 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gqc2"] Feb 18 15:31:18 crc kubenswrapper[4957]: I0218 15:31:18.915594 4957 scope.go:117] "RemoveContainer" containerID="b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29" Feb 18 15:31:19 crc kubenswrapper[4957]: I0218 15:31:19.051293 4957 scope.go:117] "RemoveContainer" containerID="34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228" Feb 18 15:31:19 crc kubenswrapper[4957]: E0218 15:31:19.059623 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228\": container with ID starting with 34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228 not found: ID does not exist" containerID="34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228" Feb 18 15:31:19 crc kubenswrapper[4957]: I0218 15:31:19.059687 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228"} err="failed to get container status \"34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228\": rpc error: code = NotFound desc = could not find container \"34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228\": container with ID starting with 34c87712ee855de3660003ea642388041ebc78e13a79b2cbcfc0040ee3131228 not found: ID does not exist" Feb 18 15:31:19 crc kubenswrapper[4957]: I0218 15:31:19.059728 4957 scope.go:117] "RemoveContainer" containerID="4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8" Feb 18 15:31:19 crc kubenswrapper[4957]: E0218 15:31:19.061267 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8\": container with ID starting with 4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8 not found: ID does not exist" containerID="4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8" Feb 18 15:31:19 crc kubenswrapper[4957]: I0218 15:31:19.061323 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8"} err="failed to get container status \"4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8\": rpc error: code = NotFound desc = could not find container \"4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8\": container with ID starting with 4515ea19b3cc34c5f83f6cdc58c01eb66093de01f8790a487adda4686ea3aed8 not found: ID does not exist" Feb 18 15:31:19 crc kubenswrapper[4957]: I0218 15:31:19.061344 4957 scope.go:117] "RemoveContainer" containerID="b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29" Feb 18 15:31:19 crc kubenswrapper[4957]: E0218 15:31:19.063819 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29\": container with ID starting with b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29 not found: ID does not exist" containerID="b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29" Feb 18 15:31:19 crc kubenswrapper[4957]: I0218 15:31:19.063893 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29"} err="failed to get container status \"b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29\": rpc error: code = NotFound desc = could not find container \"b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29\": container with ID starting with b86b940f1c1255c679db363bb1a2bb3fac0f05aa32f51b2f3b25650a55786f29 not found: ID does not exist" Feb 18 15:31:20 crc kubenswrapper[4957]: I0218 15:31:20.231964 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" path="/var/lib/kubelet/pods/6cf52cfb-7ad9-451c-946b-20b9a9451084/volumes" Feb 18 15:31:37 crc kubenswrapper[4957]: I0218 15:31:37.278965 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:31:37 crc kubenswrapper[4957]: I0218 15:31:37.279733 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:31:37 crc kubenswrapper[4957]: I0218 15:31:37.279820 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:31:37 crc kubenswrapper[4957]: I0218 15:31:37.280947 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:31:37 crc kubenswrapper[4957]: I0218 15:31:37.281068 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" gracePeriod=600 Feb 18 15:31:37 crc kubenswrapper[4957]: E0218 15:31:37.404543 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:31:38 crc kubenswrapper[4957]: I0218 15:31:38.070217 4957 generic.go:334] 
"Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" exitCode=0 Feb 18 15:31:38 crc kubenswrapper[4957]: I0218 15:31:38.070264 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95"} Feb 18 15:31:38 crc kubenswrapper[4957]: I0218 15:31:38.070332 4957 scope.go:117] "RemoveContainer" containerID="694a9df68d5854b0201397a1e27128c5bd3ff2804ec6969915987278dd3e5bec" Feb 18 15:31:38 crc kubenswrapper[4957]: I0218 15:31:38.071071 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:31:38 crc kubenswrapper[4957]: E0218 15:31:38.071442 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:31:49 crc kubenswrapper[4957]: I0218 15:31:49.213879 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:31:49 crc kubenswrapper[4957]: E0218 15:31:49.214882 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:31:50 crc kubenswrapper[4957]: E0218 15:31:50.985749 4957 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:37026->38.102.83.213:46479: write tcp 38.102.83.213:37026->38.102.83.213:46479: write: connection reset by peer Feb 18 15:32:01 crc kubenswrapper[4957]: I0218 15:32:01.214320 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:32:01 crc kubenswrapper[4957]: E0218 15:32:01.215689 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:32:12 crc kubenswrapper[4957]: I0218 15:32:12.212766 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:32:12 crc kubenswrapper[4957]: E0218 15:32:12.213590 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:32:26 crc kubenswrapper[4957]: I0218 15:32:26.213648 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:32:26 crc kubenswrapper[4957]: E0218 15:32:26.215040 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:32:40 crc kubenswrapper[4957]: I0218 15:32:40.213618 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:32:40 crc kubenswrapper[4957]: E0218 15:32:40.220356 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:32:55 crc kubenswrapper[4957]: I0218 15:32:55.215170 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:32:55 crc kubenswrapper[4957]: E0218 15:32:55.215931 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:33:07 crc kubenswrapper[4957]: I0218 15:33:07.213494 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:33:07 crc kubenswrapper[4957]: E0218 15:33:07.214387 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:33:21 crc kubenswrapper[4957]: I0218 15:33:21.213707 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:33:21 crc kubenswrapper[4957]: E0218 15:33:21.214455 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:33:35 crc kubenswrapper[4957]: I0218 15:33:35.213309 4957 
scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:33:35 crc kubenswrapper[4957]: E0218 15:33:35.214135 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:33:49 crc kubenswrapper[4957]: I0218 15:33:49.214356 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:33:49 crc kubenswrapper[4957]: E0218 15:33:49.215552 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:34:00 crc kubenswrapper[4957]: I0218 15:34:00.214087 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:34:00 crc kubenswrapper[4957]: E0218 15:34:00.215120 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:34:12 crc kubenswrapper[4957]: I0218 15:34:12.213569 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:34:12 crc kubenswrapper[4957]: E0218 15:34:12.214945 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:34:23 crc kubenswrapper[4957]: I0218 15:34:23.212886 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:34:23 crc kubenswrapper[4957]: E0218 15:34:23.213615 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:34:37 crc kubenswrapper[4957]: I0218 15:34:37.213265 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:34:37 crc kubenswrapper[4957]: E0218 15:34:37.214214 4957 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:34:52 crc kubenswrapper[4957]: I0218 15:34:52.213307 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:34:52 crc kubenswrapper[4957]: E0218 15:34:52.214151 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:35:04 crc kubenswrapper[4957]: I0218 15:35:04.227491 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:35:04 crc kubenswrapper[4957]: E0218 15:35:04.228251 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:35:09 crc kubenswrapper[4957]: I0218 15:35:09.954809 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phsm5"] Feb 18 15:35:09 crc kubenswrapper[4957]: E0218 15:35:09.956991 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="extract-utilities" Feb 18 15:35:09 crc kubenswrapper[4957]: I0218 15:35:09.957020 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="extract-utilities" Feb 18 15:35:09 crc kubenswrapper[4957]: E0218 15:35:09.957063 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="extract-content" Feb 18 15:35:09 crc kubenswrapper[4957]: I0218 15:35:09.957072 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="extract-content" Feb 18 15:35:09 crc kubenswrapper[4957]: E0218 15:35:09.957098 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="registry-server" Feb 18 15:35:09 crc kubenswrapper[4957]: I0218 15:35:09.957109 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="registry-server" Feb 18 15:35:09 crc kubenswrapper[4957]: I0218 15:35:09.957632 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf52cfb-7ad9-451c-946b-20b9a9451084" containerName="registry-server" Feb 18 15:35:09 crc kubenswrapper[4957]: I0218 15:35:09.960039 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.044337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-utilities\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.044381 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phsm5"] Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.044412 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-catalog-content\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.044654 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79h6\" (UniqueName: \"kubernetes.io/projected/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-kube-api-access-p79h6\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.146409 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-utilities\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.146749 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-catalog-content\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.146881 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-utilities\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.146955 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79h6\" (UniqueName: \"kubernetes.io/projected/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-kube-api-access-p79h6\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.147216 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-catalog-content\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.181801 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p79h6\" (UniqueName: \"kubernetes.io/projected/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-kube-api-access-p79h6\") pod \"community-operators-phsm5\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.286409 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:10 crc kubenswrapper[4957]: I0218 15:35:10.892833 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phsm5"] Feb 18 15:35:11 crc kubenswrapper[4957]: I0218 15:35:11.688785 4957 generic.go:334] "Generic (PLEG): container finished" podID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerID="bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8" exitCode=0 Feb 18 15:35:11 crc kubenswrapper[4957]: I0218 15:35:11.689060 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerDied","Data":"bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8"} Feb 18 15:35:11 crc kubenswrapper[4957]: I0218 15:35:11.689091 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerStarted","Data":"0107a52e8a9f3ca5d629c1ed4a100b1a160e460f15658d409e6071833bd7cb24"} Feb 18 15:35:11 crc kubenswrapper[4957]: I0218 15:35:11.691867 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:35:12 crc kubenswrapper[4957]: I0218 15:35:12.704667 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerStarted","Data":"e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6"} Feb 18 15:35:14 crc kubenswrapper[4957]: E0218 15:35:14.595816 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3dfa20f_2817_41e0_bd2a_6be0700e1f75.slice/crio-e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6.scope\": RecentStats: unable to find data in memory cache]" Feb 18 15:35:14 crc kubenswrapper[4957]: I0218 15:35:14.735772 4957 generic.go:334] "Generic (PLEG): container finished" podID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerID="e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6" exitCode=0 Feb 18 15:35:14 crc kubenswrapper[4957]: I0218 15:35:14.735853 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerDied","Data":"e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6"} Feb 18 15:35:15 crc kubenswrapper[4957]: I0218 15:35:15.774712 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerStarted","Data":"17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97"} Feb 18 15:35:15 crc kubenswrapper[4957]: I0218 15:35:15.805676 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-phsm5" podStartSLOduration=3.275009659 podStartE2EDuration="6.80565886s" podCreationTimestamp="2026-02-18 15:35:09 +0000 UTC" firstStartedPulling="2026-02-18 15:35:11.691404091 +0000 UTC m=+3818.212268875" lastFinishedPulling="2026-02-18 15:35:15.222053332 +0000 UTC m=+3821.742918076" observedRunningTime="2026-02-18 15:35:15.800245505 +0000 UTC m=+3822.321110249" watchObservedRunningTime="2026-02-18 15:35:15.80565886 +0000 UTC m=+3822.326523604" Feb 18 15:35:17 crc kubenswrapper[4957]: I0218 15:35:17.212950 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:35:17 crc kubenswrapper[4957]: E0218 15:35:17.213718 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:35:20 crc kubenswrapper[4957]: I0218 15:35:20.287659 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:20 crc kubenswrapper[4957]: I0218 15:35:20.288204 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:20 crc kubenswrapper[4957]: I0218 15:35:20.338036 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:20 crc kubenswrapper[4957]: I0218 15:35:20.906102 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:20 crc kubenswrapper[4957]: I0218 15:35:20.971320 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phsm5"] Feb 18 15:35:22 crc kubenswrapper[4957]: I0218 15:35:22.853135 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-phsm5" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="registry-server" containerID="cri-o://17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97" gracePeriod=2 Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.403931 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.519232 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-utilities\") pod \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.519412 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-catalog-content\") pod \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.519515 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p79h6\" (UniqueName: \"kubernetes.io/projected/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-kube-api-access-p79h6\") pod \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\" (UID: \"a3dfa20f-2817-41e0-bd2a-6be0700e1f75\") " Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.520038 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-utilities" (OuterVolumeSpecName: "utilities") pod "a3dfa20f-2817-41e0-bd2a-6be0700e1f75" (UID: "a3dfa20f-2817-41e0-bd2a-6be0700e1f75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.520277 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.528254 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-kube-api-access-p79h6" (OuterVolumeSpecName: "kube-api-access-p79h6") pod "a3dfa20f-2817-41e0-bd2a-6be0700e1f75" (UID: "a3dfa20f-2817-41e0-bd2a-6be0700e1f75"). InnerVolumeSpecName "kube-api-access-p79h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.582486 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3dfa20f-2817-41e0-bd2a-6be0700e1f75" (UID: "a3dfa20f-2817-41e0-bd2a-6be0700e1f75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.649769 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p79h6\" (UniqueName: \"kubernetes.io/projected/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-kube-api-access-p79h6\") on node \"crc\" DevicePath \"\"" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.649841 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3dfa20f-2817-41e0-bd2a-6be0700e1f75-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.866031 4957 generic.go:334] "Generic (PLEG): container finished" podID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerID="17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97" exitCode=0 Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.866082 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerDied","Data":"17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97"} Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.866151 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phsm5" event={"ID":"a3dfa20f-2817-41e0-bd2a-6be0700e1f75","Type":"ContainerDied","Data":"0107a52e8a9f3ca5d629c1ed4a100b1a160e460f15658d409e6071833bd7cb24"} Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.866176 4957 scope.go:117] "RemoveContainer" containerID="17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.866819 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phsm5" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.903286 4957 scope.go:117] "RemoveContainer" containerID="e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.905834 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phsm5"] Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.921280 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-phsm5"] Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.935283 4957 scope.go:117] "RemoveContainer" containerID="bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.987626 4957 scope.go:117] "RemoveContainer" containerID="17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97" Feb 18 15:35:23 crc kubenswrapper[4957]: E0218 15:35:23.988799 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97\": container with ID starting with 17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97 not found: ID does not exist" containerID="17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.988831 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97"} err="failed to get container status \"17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97\": rpc error: code = NotFound desc = could not find container \"17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97\": container with ID starting with 17bdb7edf8e7e1766fde1db0f80da0a9e8b674c992342107acdccac84737bf97 not found: ID does not exist" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.988853 4957 scope.go:117] "RemoveContainer" containerID="e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6" Feb 18 15:35:23 crc kubenswrapper[4957]: E0218 15:35:23.989237 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6\": container with ID starting with e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6 not found: ID does not exist" containerID="e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.989329 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6"} err="failed to get container status \"e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6\": rpc error: code = NotFound desc = could not find container \"e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6\": container with ID starting with e9b125ab62e6a861bd1929bec33fc724a5fc06a8e235534be3b6970a31daf1c6 not found: ID does not exist" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.989405 4957 scope.go:117] "RemoveContainer" containerID="bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8" Feb 18 15:35:23 crc kubenswrapper[4957]: E0218 15:35:23.989885 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8\": container with ID starting with bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8 not found: ID does not exist" containerID="bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8" Feb 18 15:35:23 crc kubenswrapper[4957]: I0218 15:35:23.989912 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8"} err="failed to get container status \"bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8\": rpc error: code = NotFound desc = could not find container \"bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8\": container with ID starting with bdfc266648f258a0528302f4db378f3b065c6ea01b305ef03bbaf778ee8a60e8 not found: ID does not exist" Feb 18 15:35:24 crc kubenswrapper[4957]: I0218 15:35:24.226589 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" path="/var/lib/kubelet/pods/a3dfa20f-2817-41e0-bd2a-6be0700e1f75/volumes" Feb 18 15:35:32 crc kubenswrapper[4957]: I0218 15:35:32.213822 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:35:32 crc kubenswrapper[4957]: E0218 15:35:32.214698 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:35:45 crc kubenswrapper[4957]: I0218 15:35:45.212962 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:35:45 crc kubenswrapper[4957]: E0218 15:35:45.213755 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:35:59 crc kubenswrapper[4957]: I0218 15:35:59.214736 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:35:59 crc kubenswrapper[4957]: E0218 15:35:59.215475 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:36:12 crc kubenswrapper[4957]: I0218 15:36:12.213712 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:36:12 crc kubenswrapper[4957]: E0218 15:36:12.214761 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:36:23 crc kubenswrapper[4957]: I0218 15:36:23.213329 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:36:23 crc kubenswrapper[4957]: E0218 15:36:23.214291 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:36:35 crc kubenswrapper[4957]: I0218 15:36:35.214010 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:36:35 crc kubenswrapper[4957]: E0218 15:36:35.217565 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:36:46 crc kubenswrapper[4957]: I0218 15:36:46.219727 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:36:47 crc kubenswrapper[4957]: I0218 15:36:47.943440 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"ce24a050fa4f86c2157fad1dbfe19397a83b3ea4b1b26e8a090d6f43f8831d54"} Feb 18 15:39:07 crc kubenswrapper[4957]: I0218 15:39:07.278905 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:39:07 crc kubenswrapper[4957]: I0218 15:39:07.279481 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:39:37 crc kubenswrapper[4957]: I0218 15:39:37.279186 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:39:37 crc kubenswrapper[4957]: I0218 15:39:37.279833 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:40:07 crc kubenswrapper[4957]: I0218 15:40:07.325108 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:40:07 crc kubenswrapper[4957]: I0218 15:40:07.325668 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:40:07 crc kubenswrapper[4957]: I0218 15:40:07.326175 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:40:07 crc kubenswrapper[4957]: I0218 15:40:07.327540 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce24a050fa4f86c2157fad1dbfe19397a83b3ea4b1b26e8a090d6f43f8831d54"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:40:07 crc kubenswrapper[4957]: I0218 15:40:07.327629 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://ce24a050fa4f86c2157fad1dbfe19397a83b3ea4b1b26e8a090d6f43f8831d54" gracePeriod=600 Feb 18 15:40:08 crc kubenswrapper[4957]: I0218 15:40:08.410835 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="ce24a050fa4f86c2157fad1dbfe19397a83b3ea4b1b26e8a090d6f43f8831d54" exitCode=0 Feb 18 15:40:08 crc kubenswrapper[4957]: I0218 15:40:08.410882 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"ce24a050fa4f86c2157fad1dbfe19397a83b3ea4b1b26e8a090d6f43f8831d54"} Feb 18 15:40:08 crc kubenswrapper[4957]: I0218 15:40:08.411532 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"} Feb 18 15:40:08 crc kubenswrapper[4957]: I0218 15:40:08.411563 4957 scope.go:117] "RemoveContainer" containerID="0d0795f27053579c310c9a8aa84cb704281da505980410f7ddf14c4b9d63bd95" Feb 18 15:40:25 crc kubenswrapper[4957]: E0218 15:40:25.248378 4957 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:43538->38.102.83.213:46479: write tcp 38.102.83.213:43538->38.102.83.213:46479: write: broken pipe Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.086772 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mtbsd"] Feb 18 15:40:55 crc kubenswrapper[4957]: E0218 15:40:55.088439 4957 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="extract-utilities" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.088466 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="extract-utilities" Feb 18 15:40:55 crc kubenswrapper[4957]: E0218 15:40:55.088510 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="registry-server" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.088523 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="registry-server" Feb 18 15:40:55 crc kubenswrapper[4957]: E0218 15:40:55.088570 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="extract-content" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.088581 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="extract-content" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.089055 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3dfa20f-2817-41e0-bd2a-6be0700e1f75" containerName="registry-server" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.092443 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.107749 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtbsd"] Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.276824 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns4vq\" (UniqueName: \"kubernetes.io/projected/66159dcb-b549-41a9-8baa-0fdd5141fc04-kube-api-access-ns4vq\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.277237 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-utilities\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.277334 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-catalog-content\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.379217 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-utilities\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.379630 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-catalog-content\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.380106 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns4vq\" (UniqueName: \"kubernetes.io/projected/66159dcb-b549-41a9-8baa-0fdd5141fc04-kube-api-access-ns4vq\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.380961 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-utilities\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.394161 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-catalog-content\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.406744 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns4vq\" (UniqueName: \"kubernetes.io/projected/66159dcb-b549-41a9-8baa-0fdd5141fc04-kube-api-access-ns4vq\") pod \"redhat-operators-mtbsd\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") " pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:55 crc kubenswrapper[4957]: I0218 15:40:55.426092 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.750305 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtbsd"] Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.879332 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rv779"] Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.883627 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.911504 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rv779"] Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.928218 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-utilities\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.928267 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvvd\" (UniqueName: \"kubernetes.io/projected/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-kube-api-access-vdvvd\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:56 crc kubenswrapper[4957]: I0218 15:40:56.928324 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-catalog-content\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.033329 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdvvd\" (UniqueName: \"kubernetes.io/projected/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-kube-api-access-vdvvd\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.033697 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-catalog-content\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.033917 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-utilities\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.034254 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-catalog-content\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.034351 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-utilities\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.040361 4957 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerStarted","Data":"7a3f548db1110196bd58b36becd320cfc7eae78682861fd2b3df894de89f8ae8"} Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.055317 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdvvd\" (UniqueName: \"kubernetes.io/projected/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-kube-api-access-vdvvd\") pod \"certified-operators-rv779\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.275053 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:40:57 crc kubenswrapper[4957]: I0218 15:40:57.816581 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rv779"] Feb 18 15:40:58 crc kubenswrapper[4957]: I0218 15:40:58.064186 4957 generic.go:334] "Generic (PLEG): container finished" podID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerID="aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13" exitCode=0 Feb 18 15:40:58 crc kubenswrapper[4957]: I0218 15:40:58.064348 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerDied","Data":"aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13"} Feb 18 15:40:58 crc kubenswrapper[4957]: I0218 15:40:58.066572 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerStarted","Data":"6c15cf957ff92df55a1eacda10fcd4fc417c62d8265950258960dea2f3be8e06"} Feb 18 15:40:58 crc kubenswrapper[4957]: I0218 15:40:58.067769 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:40:59 crc kubenswrapper[4957]: I0218 15:40:59.084860 4957 generic.go:334] "Generic (PLEG): container finished" podID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerID="767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893" exitCode=0 Feb 18 15:40:59 crc kubenswrapper[4957]: I0218 15:40:59.086045 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerDied","Data":"767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893"} Feb 18 15:41:00 crc kubenswrapper[4957]: I0218 15:41:00.097493 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerStarted","Data":"f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2"} Feb 18 15:41:01 crc kubenswrapper[4957]: I0218 15:41:01.117929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerStarted","Data":"8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278"} Feb 18 15:41:04 crc kubenswrapper[4957]: I0218 15:41:04.152992 4957 generic.go:334] "Generic (PLEG): container finished" podID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerID="8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278" exitCode=0 Feb 18 15:41:04 crc 
kubenswrapper[4957]: I0218 15:41:04.153724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerDied","Data":"8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278"} Feb 18 15:41:05 crc kubenswrapper[4957]: I0218 15:41:05.174774 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerStarted","Data":"7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068"} Feb 18 15:41:05 crc kubenswrapper[4957]: I0218 15:41:05.198012 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rv779" podStartSLOduration=3.723359968 podStartE2EDuration="9.19799288s" podCreationTimestamp="2026-02-18 15:40:56 +0000 UTC" firstStartedPulling="2026-02-18 15:40:59.087346737 +0000 UTC m=+4165.608211481" lastFinishedPulling="2026-02-18 15:41:04.561979649 +0000 UTC m=+4171.082844393" observedRunningTime="2026-02-18 15:41:05.195077226 +0000 UTC m=+4171.715941990" watchObservedRunningTime="2026-02-18 15:41:05.19799288 +0000 UTC m=+4171.718857644" Feb 18 15:41:07 crc kubenswrapper[4957]: I0218 15:41:07.200243 4957 generic.go:334] "Generic (PLEG): container finished" podID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerID="f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2" exitCode=0 Feb 18 15:41:07 crc kubenswrapper[4957]: I0218 15:41:07.200314 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerDied","Data":"f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2"} Feb 18 15:41:07 crc kubenswrapper[4957]: I0218 15:41:07.276183 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:41:07 crc kubenswrapper[4957]: I0218 15:41:07.276252 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:41:08 crc kubenswrapper[4957]: I0218 15:41:08.226783 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerStarted","Data":"082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8"} Feb 18 15:41:08 crc kubenswrapper[4957]: I0218 15:41:08.255829 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mtbsd" podStartSLOduration=3.723239051 podStartE2EDuration="13.255806982s" podCreationTimestamp="2026-02-18 15:40:55 +0000 UTC" firstStartedPulling="2026-02-18 15:40:58.067052774 +0000 UTC m=+4164.587917518" lastFinishedPulling="2026-02-18 15:41:07.599620695 +0000 UTC m=+4174.120485449" observedRunningTime="2026-02-18 15:41:08.241521034 +0000 UTC m=+4174.762385798" watchObservedRunningTime="2026-02-18 15:41:08.255806982 +0000 UTC m=+4174.776671726" Feb 18 15:41:08 crc kubenswrapper[4957]: I0218 15:41:08.333050 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rv779" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:08 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 
15:41:08 crc kubenswrapper[4957]: > Feb 18 15:41:15 crc kubenswrapper[4957]: I0218 15:41:15.427806 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:41:15 crc kubenswrapper[4957]: I0218 15:41:15.428637 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:41:17 crc kubenswrapper[4957]: I0218 15:41:17.144384 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:17 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:41:17 crc kubenswrapper[4957]: > Feb 18 15:41:18 crc kubenswrapper[4957]: I0218 15:41:18.331725 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rv779" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:18 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:41:18 crc kubenswrapper[4957]: > Feb 18 15:41:26 crc kubenswrapper[4957]: I0218 15:41:26.490632 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:26 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:41:26 crc kubenswrapper[4957]: > Feb 18 15:41:27 crc kubenswrapper[4957]: I0218 15:41:27.456963 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:41:27 crc kubenswrapper[4957]: I0218 15:41:27.511907 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:41:28 crc kubenswrapper[4957]: I0218 15:41:28.083832 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rv779"] Feb 18 15:41:29 crc kubenswrapper[4957]: I0218 15:41:29.483392 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rv779" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="registry-server" containerID="cri-o://7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068" gracePeriod=2 Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.326525 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.445812 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-catalog-content\") pod \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.446020 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdvvd\" (UniqueName: \"kubernetes.io/projected/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-kube-api-access-vdvvd\") pod \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.446505 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-utilities\") pod \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\" (UID: \"a4b9050a-9f4f-4790-a7e5-8baaaff5e611\") " Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.447287 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-utilities" (OuterVolumeSpecName: "utilities") pod "a4b9050a-9f4f-4790-a7e5-8baaaff5e611" (UID: "a4b9050a-9f4f-4790-a7e5-8baaaff5e611"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.447617 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.453570 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-kube-api-access-vdvvd" (OuterVolumeSpecName: "kube-api-access-vdvvd") pod "a4b9050a-9f4f-4790-a7e5-8baaaff5e611" (UID: "a4b9050a-9f4f-4790-a7e5-8baaaff5e611"). InnerVolumeSpecName "kube-api-access-vdvvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.495573 4957 generic.go:334] "Generic (PLEG): container finished" podID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerID="7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068" exitCode=0 Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.495639 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerDied","Data":"7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068"} Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.495666 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv779" event={"ID":"a4b9050a-9f4f-4790-a7e5-8baaaff5e611","Type":"ContainerDied","Data":"6c15cf957ff92df55a1eacda10fcd4fc417c62d8265950258960dea2f3be8e06"} Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.495703 4957 scope.go:117] "RemoveContainer" containerID="7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.495898 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rv779" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.508250 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4b9050a-9f4f-4790-a7e5-8baaaff5e611" (UID: "a4b9050a-9f4f-4790-a7e5-8baaaff5e611"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.523324 4957 scope.go:117] "RemoveContainer" containerID="8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.547746 4957 scope.go:117] "RemoveContainer" containerID="767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.551672 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.551713 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdvvd\" (UniqueName: \"kubernetes.io/projected/a4b9050a-9f4f-4790-a7e5-8baaaff5e611-kube-api-access-vdvvd\") on node \"crc\" DevicePath \"\"" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.613118 4957 scope.go:117] "RemoveContainer" containerID="7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068" Feb 18 15:41:30 crc kubenswrapper[4957]: E0218 15:41:30.613915 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068\": container with ID starting with 7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068 not found: ID does not exist" containerID="7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.613947 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068"} err="failed to get container status \"7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068\": rpc error: code = NotFound desc = could not find container \"7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068\": container with ID starting with 7847778f6f535ff05e9b8e60500464e41cb73df2119fa79ff9c234136e074068 not found: ID does not exist" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.613969 4957 scope.go:117] "RemoveContainer" containerID="8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278" Feb 18 15:41:30 crc kubenswrapper[4957]: E0218 15:41:30.614277 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278\": container with ID starting with 8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278 not found: ID does not exist" containerID="8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.614321 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278"} 
err="failed to get container status \"8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278\": rpc error: code = NotFound desc = could not find container \"8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278\": container with ID starting with 8e32b1f453a6cb290c94732fa842a9187f34d43c6435ef46ac968ef0944f9278 not found: ID does not exist" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.614350 4957 scope.go:117] "RemoveContainer" containerID="767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893" Feb 18 15:41:30 crc kubenswrapper[4957]: E0218 15:41:30.614724 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893\": container with ID starting with 767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893 not found: ID does not exist" containerID="767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.614772 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893"} err="failed to get container status \"767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893\": rpc error: code = NotFound desc = could not find container \"767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893\": container with ID starting with 767cc968d4bb16080dd6bfdbedf1c4cdb9e759ff82e5dc67847f575c437f0893 not found: ID does not exist" Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.876089 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rv779"] Feb 18 15:41:30 crc kubenswrapper[4957]: I0218 15:41:30.890057 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rv779"] Feb 18 15:41:32 crc kubenswrapper[4957]: I0218 15:41:32.224642 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" path="/var/lib/kubelet/pods/a4b9050a-9f4f-4790-a7e5-8baaaff5e611/volumes" Feb 18 15:41:36 crc kubenswrapper[4957]: I0218 15:41:36.529079 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:36 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:41:36 crc kubenswrapper[4957]: > Feb 18 15:41:46 crc kubenswrapper[4957]: I0218 15:41:46.479290 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:46 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:41:46 crc kubenswrapper[4957]: > Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.160366 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5kzvk"] Feb 18 15:41:48 crc kubenswrapper[4957]: E0218 15:41:48.161492 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="extract-utilities" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.161514 4957 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="extract-utilities" Feb 18 15:41:48 crc kubenswrapper[4957]: E0218 15:41:48.161537 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="registry-server" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.161548 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="registry-server" Feb 18 15:41:48 crc kubenswrapper[4957]: E0218 15:41:48.161573 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="extract-content" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.161580 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="extract-content" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.161861 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b9050a-9f4f-4790-a7e5-8baaaff5e611" containerName="registry-server" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.163804 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.172665 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kzvk"] Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.223205 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-catalog-content\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.223243 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjkz\" (UniqueName: \"kubernetes.io/projected/54317c16-19fd-4cc7-adbc-dee75502de0a-kube-api-access-xcjkz\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.223604 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-utilities\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.325636 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-utilities\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.325873 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-catalog-content\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.325929 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xcjkz\" (UniqueName: \"kubernetes.io/projected/54317c16-19fd-4cc7-adbc-dee75502de0a-kube-api-access-xcjkz\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.326540 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-utilities\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.326843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-catalog-content\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.353585 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjkz\" (UniqueName: \"kubernetes.io/projected/54317c16-19fd-4cc7-adbc-dee75502de0a-kube-api-access-xcjkz\") pod \"redhat-marketplace-5kzvk\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:48 crc kubenswrapper[4957]: I0218 15:41:48.495584 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:49 crc kubenswrapper[4957]: I0218 15:41:49.039383 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kzvk"] Feb 18 15:41:49 crc kubenswrapper[4957]: I0218 15:41:49.774226 4957 generic.go:334] "Generic (PLEG): container finished" podID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerID="0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06" exitCode=0 Feb 18 15:41:49 crc kubenswrapper[4957]: I0218 15:41:49.774833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerDied","Data":"0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06"} Feb 18 15:41:49 crc kubenswrapper[4957]: I0218 15:41:49.774864 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerStarted","Data":"fcb6658c01bd895c07207c836aca33cb0c733eac946290be029884ea726e6c9c"} Feb 18 15:41:51 crc kubenswrapper[4957]: I0218 15:41:51.809918 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerStarted","Data":"fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85"} Feb 18 15:41:53 crc kubenswrapper[4957]: I0218 15:41:53.834827 4957 generic.go:334] "Generic (PLEG): container finished" podID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerID="fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85" exitCode=0 Feb 18 15:41:53 crc kubenswrapper[4957]: I0218 15:41:53.835083 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" 
event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerDied","Data":"fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85"} Feb 18 15:41:55 crc kubenswrapper[4957]: I0218 15:41:55.901217 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerStarted","Data":"59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c"} Feb 18 15:41:55 crc kubenswrapper[4957]: I0218 15:41:55.929324 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5kzvk" podStartSLOduration=3.238503956 podStartE2EDuration="7.929303036s" podCreationTimestamp="2026-02-18 15:41:48 +0000 UTC" firstStartedPulling="2026-02-18 15:41:49.777308403 +0000 UTC m=+4216.298173147" lastFinishedPulling="2026-02-18 15:41:54.468107473 +0000 UTC m=+4220.988972227" observedRunningTime="2026-02-18 15:41:55.920278199 +0000 UTC m=+4222.441142943" watchObservedRunningTime="2026-02-18 15:41:55.929303036 +0000 UTC m=+4222.450167780" Feb 18 15:41:56 crc kubenswrapper[4957]: I0218 15:41:56.479692 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" probeResult="failure" output=< Feb 18 15:41:56 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:41:56 crc kubenswrapper[4957]: > Feb 18 15:41:58 crc kubenswrapper[4957]: I0218 15:41:58.497141 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:58 crc kubenswrapper[4957]: I0218 15:41:58.498099 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:41:58 crc kubenswrapper[4957]: I0218 15:41:58.564978 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:42:06 crc kubenswrapper[4957]: I0218 15:42:06.500011 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" probeResult="failure" output=< Feb 18 15:42:06 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:42:06 crc kubenswrapper[4957]: > Feb 18 15:42:07 crc kubenswrapper[4957]: I0218 15:42:07.009884 4957 trace.go:236] Trace[1385018752]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (18-Feb-2026 15:42:05.981) (total time: 1028ms): Feb 18 15:42:07 crc kubenswrapper[4957]: Trace[1385018752]: [1.028337653s] [1.028337653s] END Feb 18 15:42:07 crc kubenswrapper[4957]: I0218 15:42:07.278591 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:42:07 crc kubenswrapper[4957]: I0218 15:42:07.278644 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 18 15:42:08 crc kubenswrapper[4957]: I0218 15:42:08.559666 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:42:08 crc kubenswrapper[4957]: I0218 15:42:08.620274 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kzvk"] Feb 18 15:42:09 crc kubenswrapper[4957]: I0218 15:42:09.114447 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5kzvk" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="registry-server" containerID="cri-o://59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c" gracePeriod=2 Feb 18 15:42:09 crc kubenswrapper[4957]: I0218 15:42:09.953586 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.053157 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjkz\" (UniqueName: \"kubernetes.io/projected/54317c16-19fd-4cc7-adbc-dee75502de0a-kube-api-access-xcjkz\") pod \"54317c16-19fd-4cc7-adbc-dee75502de0a\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.053299 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-utilities\") pod \"54317c16-19fd-4cc7-adbc-dee75502de0a\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.053495 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-catalog-content\") pod \"54317c16-19fd-4cc7-adbc-dee75502de0a\" (UID: \"54317c16-19fd-4cc7-adbc-dee75502de0a\") " Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.053946 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-utilities" (OuterVolumeSpecName: "utilities") pod "54317c16-19fd-4cc7-adbc-dee75502de0a" (UID: "54317c16-19fd-4cc7-adbc-dee75502de0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.054279 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.063619 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54317c16-19fd-4cc7-adbc-dee75502de0a-kube-api-access-xcjkz" (OuterVolumeSpecName: "kube-api-access-xcjkz") pod "54317c16-19fd-4cc7-adbc-dee75502de0a" (UID: "54317c16-19fd-4cc7-adbc-dee75502de0a"). InnerVolumeSpecName "kube-api-access-xcjkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.085353 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54317c16-19fd-4cc7-adbc-dee75502de0a" (UID: "54317c16-19fd-4cc7-adbc-dee75502de0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.135554 4957 generic.go:334] "Generic (PLEG): container finished" podID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerID="59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c" exitCode=0 Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.135614 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerDied","Data":"59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c"} Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.135623 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kzvk" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.135674 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kzvk" event={"ID":"54317c16-19fd-4cc7-adbc-dee75502de0a","Type":"ContainerDied","Data":"fcb6658c01bd895c07207c836aca33cb0c733eac946290be029884ea726e6c9c"} Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.135705 4957 scope.go:117] "RemoveContainer" containerID="59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.156630 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54317c16-19fd-4cc7-adbc-dee75502de0a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.156672 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjkz\" (UniqueName: \"kubernetes.io/projected/54317c16-19fd-4cc7-adbc-dee75502de0a-kube-api-access-xcjkz\") on node \"crc\" DevicePath \"\"" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.187096 4957 scope.go:117] "RemoveContainer" containerID="fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.194079 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kzvk"] Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.209815 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kzvk"] Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.209962 4957 scope.go:117] "RemoveContainer" containerID="0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.238113 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" path="/var/lib/kubelet/pods/54317c16-19fd-4cc7-adbc-dee75502de0a/volumes" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.285401 4957 scope.go:117] "RemoveContainer" containerID="59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c" Feb 18 15:42:10 crc kubenswrapper[4957]: E0218 15:42:10.288797 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c\": container with ID starting with 59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c not found: ID does not exist" containerID="59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c" Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.288962 4957 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c"} err="failed to get container status \"59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c\": rpc error: code = NotFound desc = could not find container \"59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c\": container with ID starting with 59543a4a07060222c1645327e8b2fedf08a887286b39b559b224017ab723990c not found: ID does not exist"
Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.289064 4957 scope.go:117] "RemoveContainer" containerID="fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85"
Feb 18 15:42:10 crc kubenswrapper[4957]: E0218 15:42:10.292681 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85\": container with ID starting with fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85 not found: ID does not exist" containerID="fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85"
Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.292815 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85"} err="failed to get container status \"fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85\": rpc error: code = NotFound desc = could not find container \"fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85\": container with ID starting with fb106d6bed965295647451fb2252a6eb9e4f8492c0554535e60b5d3901205c85 not found: ID does not exist"
Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.292881 4957 scope.go:117] "RemoveContainer" containerID="0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06"
Feb 18 15:42:10 crc kubenswrapper[4957]: E0218 15:42:10.293175 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06\": container with ID starting with 0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06 not found: ID does not exist" containerID="0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06"
Feb 18 15:42:10 crc kubenswrapper[4957]: I0218 15:42:10.293223 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06"} err="failed to get container status \"0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06\": rpc error: code = NotFound desc = could not find container \"0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06\": container with ID starting with 0f329d925346a4fc8fb8a86ee1d9bdec2691116c093d51ecdc407f3427af7e06 not found: ID does not exist"
Feb 18 15:42:15 crc kubenswrapper[4957]: I0218 15:42:15.506985 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mtbsd"
Feb 18 15:42:15 crc kubenswrapper[4957]: I0218 15:42:15.563634 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mtbsd"
Feb 18 15:42:15 crc kubenswrapper[4957]: I0218 15:42:15.783749 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtbsd"]
Feb 18
15:42:17 crc kubenswrapper[4957]: I0218 15:42:17.218912 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mtbsd" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" containerID="cri-o://082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8" gracePeriod=2
Feb 18 15:42:17 crc kubenswrapper[4957]: I0218 15:42:17.917044 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtbsd"
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.059212 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns4vq\" (UniqueName: \"kubernetes.io/projected/66159dcb-b549-41a9-8baa-0fdd5141fc04-kube-api-access-ns4vq\") pod \"66159dcb-b549-41a9-8baa-0fdd5141fc04\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") "
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.060107 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-catalog-content\") pod \"66159dcb-b549-41a9-8baa-0fdd5141fc04\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") "
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.060188 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-utilities\") pod \"66159dcb-b549-41a9-8baa-0fdd5141fc04\" (UID: \"66159dcb-b549-41a9-8baa-0fdd5141fc04\") "
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.063386 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-utilities" (OuterVolumeSpecName: "utilities") pod "66159dcb-b549-41a9-8baa-0fdd5141fc04" (UID: "66159dcb-b549-41a9-8baa-0fdd5141fc04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.082534 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66159dcb-b549-41a9-8baa-0fdd5141fc04-kube-api-access-ns4vq" (OuterVolumeSpecName: "kube-api-access-ns4vq") pod "66159dcb-b549-41a9-8baa-0fdd5141fc04" (UID: "66159dcb-b549-41a9-8baa-0fdd5141fc04"). InnerVolumeSpecName "kube-api-access-ns4vq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.164348 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns4vq\" (UniqueName: \"kubernetes.io/projected/66159dcb-b549-41a9-8baa-0fdd5141fc04-kube-api-access-ns4vq\") on node \"crc\" DevicePath \"\""
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.164398 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.184573 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66159dcb-b549-41a9-8baa-0fdd5141fc04" (UID: "66159dcb-b549-41a9-8baa-0fdd5141fc04"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.233695 4957 generic.go:334] "Generic (PLEG): container finished" podID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerID="082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8" exitCode=0 Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.233743 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerDied","Data":"082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8"} Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.233756 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtbsd" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.233772 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtbsd" event={"ID":"66159dcb-b549-41a9-8baa-0fdd5141fc04","Type":"ContainerDied","Data":"7a3f548db1110196bd58b36becd320cfc7eae78682861fd2b3df894de89f8ae8"} Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.233793 4957 scope.go:117] "RemoveContainer" containerID="082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.262239 4957 scope.go:117] "RemoveContainer" containerID="f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.277015 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66159dcb-b549-41a9-8baa-0fdd5141fc04-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.293697 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtbsd"] Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.311283 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mtbsd"] Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.312961 4957 scope.go:117] "RemoveContainer" containerID="aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.374941 4957 scope.go:117] "RemoveContainer" containerID="082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8" Feb 18 15:42:18 crc kubenswrapper[4957]: E0218 15:42:18.375536 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8\": container with ID starting with 082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8 not found: ID does not exist" containerID="082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8" Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.375581 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8"} err="failed to get container status \"082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8\": rpc error: code = NotFound desc = could not find container \"082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8\": container with ID starting with 082092e27071624015ae464c791d41608d5a1b872181b26e6f087dcd488041e8 not found: ID does not exist" Feb 18 15:42:18 crc 
kubenswrapper[4957]: I0218 15:42:18.375610 4957 scope.go:117] "RemoveContainer" containerID="f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2"
Feb 18 15:42:18 crc kubenswrapper[4957]: E0218 15:42:18.375999 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2\": container with ID starting with f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2 not found: ID does not exist" containerID="f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2"
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.376018 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2"} err="failed to get container status \"f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2\": rpc error: code = NotFound desc = could not find container \"f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2\": container with ID starting with f544992057ad192873e1c411bcb2fee0f5af37f86cf181749274af498d3afcc2 not found: ID does not exist"
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.376032 4957 scope.go:117] "RemoveContainer" containerID="aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13"
Feb 18 15:42:18 crc kubenswrapper[4957]: E0218 15:42:18.376548 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13\": container with ID starting with aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13 not found: ID does not exist" containerID="aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13"
Feb 18 15:42:18 crc kubenswrapper[4957]: I0218 15:42:18.376680 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13"} err="failed to get container status \"aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13\": rpc error: code = NotFound desc = could not find container \"aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13\": container with ID starting with aeec9bee4ce9b576897f63211b2a9d20819ccf11e4543061c834839db1213b13 not found: ID does not exist"
Feb 18 15:42:20 crc kubenswrapper[4957]: I0218 15:42:20.233515 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" path="/var/lib/kubelet/pods/66159dcb-b549-41a9-8baa-0fdd5141fc04/volumes"
Feb 18 15:42:37 crc kubenswrapper[4957]: I0218 15:42:37.279512 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:42:37 crc kubenswrapper[4957]: I0218 15:42:37.279929 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:43:06 crc kubenswrapper[4957]: E0218 15:43:06.207876 4957 upgradeaware.go:427] Error proxying
data from client to backend: readfrom tcp 38.102.83.213:38954->38.102.83.213:46479: write tcp 38.102.83.213:38954->38.102.83.213:46479: write: broken pipe
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.279545 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.279960 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.280009 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.280958 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.281017 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" gracePeriod=600
Feb 18 15:43:07 crc kubenswrapper[4957]: E0218 15:43:07.438490 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.843307 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" exitCode=0
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.843403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"}
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.843727 4957 scope.go:117] "RemoveContainer" containerID="ce24a050fa4f86c2157fad1dbfe19397a83b3ea4b1b26e8a090d6f43f8831d54"
Feb 18 15:43:07 crc kubenswrapper[4957]: I0218 15:43:07.844966 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:43:07 crc kubenswrapper[4957]: E0218 15:43:07.847695 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:43:22 crc kubenswrapper[4957]: I0218 15:43:22.217282 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:43:22 crc kubenswrapper[4957]: E0218 15:43:22.218034 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:43:35 crc kubenswrapper[4957]: I0218 15:43:35.213290 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:43:35 crc kubenswrapper[4957]: E0218 15:43:35.214567 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:43:46 crc kubenswrapper[4957]: I0218 15:43:46.213638 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:43:46 crc kubenswrapper[4957]: E0218 15:43:46.214478 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:44:01 crc kubenswrapper[4957]: I0218 15:44:01.212864 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:44:01 crc kubenswrapper[4957]: E0218 15:44:01.213809 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:44:12 crc kubenswrapper[4957]: I0218 15:44:12.214002 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:44:12 crc kubenswrapper[4957]: E0218 15:44:12.215161 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:44:24 crc kubenswrapper[4957]: I0218 15:44:24.232985 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:44:24 crc kubenswrapper[4957]: E0218 15:44:24.234328 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:44:36 crc kubenswrapper[4957]: I0218 15:44:36.213129 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:44:36 crc kubenswrapper[4957]: E0218 15:44:36.213930 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:44:48 crc kubenswrapper[4957]: I0218 15:44:48.215556 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:44:48 crc kubenswrapper[4957]: E0218 15:44:48.216869 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.227003 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"]
Feb 18 15:45:00 crc kubenswrapper[4957]: E0218 15:45:00.228085 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="extract-content"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228104 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="extract-content"
Feb 18 15:45:00 crc kubenswrapper[4957]: E0218 15:45:00.228140 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="extract-utilities"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228150 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="extract-utilities"
Feb 18 15:45:00 crc kubenswrapper[4957]: E0218 15:45:00.228182 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="extract-utilities"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228191 4957 state_mem.go:107] "Deleted CPUSet assignment"
podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="extract-utilities" Feb 18 15:45:00 crc kubenswrapper[4957]: E0218 15:45:00.228227 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228234 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" Feb 18 15:45:00 crc kubenswrapper[4957]: E0218 15:45:00.228245 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="extract-content" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228253 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="extract-content" Feb 18 15:45:00 crc kubenswrapper[4957]: E0218 15:45:00.228264 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="registry-server" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228273 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="registry-server" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228580 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="66159dcb-b549-41a9-8baa-0fdd5141fc04" containerName="registry-server" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.228593 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="54317c16-19fd-4cc7-adbc-dee75502de0a" containerName="registry-server" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.229651 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.231752 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.232466 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.245214 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"]
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.291764 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76bff772-eb9c-4632-9335-cb4495a2e2bd-secret-volume\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.292071 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hnw\" (UniqueName: \"kubernetes.io/projected/76bff772-eb9c-4632-9335-cb4495a2e2bd-kube-api-access-g8hnw\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.294198 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76bff772-eb9c-4632-9335-cb4495a2e2bd-config-volume\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.397468 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76bff772-eb9c-4632-9335-cb4495a2e2bd-secret-volume\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.397612 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8hnw\" (UniqueName: \"kubernetes.io/projected/76bff772-eb9c-4632-9335-cb4495a2e2bd-kube-api-access-g8hnw\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.397735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76bff772-eb9c-4632-9335-cb4495a2e2bd-config-volume\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.398532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76bff772-eb9c-4632-9335-cb4495a2e2bd-config-volume\") pod
\"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.406108 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76bff772-eb9c-4632-9335-cb4495a2e2bd-secret-volume\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.412877 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8hnw\" (UniqueName: \"kubernetes.io/projected/76bff772-eb9c-4632-9335-cb4495a2e2bd-kube-api-access-g8hnw\") pod \"collect-profiles-29523825-4qt9d\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" Feb 18 15:45:00 crc kubenswrapper[4957]: I0218 15:45:00.557281 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" Feb 18 15:45:01 crc kubenswrapper[4957]: I0218 15:45:01.083064 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"] Feb 18 15:45:01 crc kubenswrapper[4957]: I0218 15:45:01.459443 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" event={"ID":"76bff772-eb9c-4632-9335-cb4495a2e2bd","Type":"ContainerStarted","Data":"476354d0c9245a7e9ae80eb75f4eff7b4e1b0b375cbbfa5baee31f4c00b6e8eb"} Feb 18 15:45:01 crc kubenswrapper[4957]: I0218 15:45:01.459726 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" event={"ID":"76bff772-eb9c-4632-9335-cb4495a2e2bd","Type":"ContainerStarted","Data":"294970cbf96926474aa31e456b34d23803de6beabf6e222df0ee6f092c0fe731"} Feb 18 15:45:01 crc kubenswrapper[4957]: I0218 15:45:01.484802 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" podStartSLOduration=1.4847796180000001 podStartE2EDuration="1.484779618s" podCreationTimestamp="2026-02-18 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:45:01.473886576 +0000 UTC m=+4407.994751330" watchObservedRunningTime="2026-02-18 15:45:01.484779618 +0000 UTC m=+4408.005644362" Feb 18 15:45:02 crc kubenswrapper[4957]: I0218 15:45:02.472539 4957 generic.go:334] "Generic (PLEG): container finished" podID="76bff772-eb9c-4632-9335-cb4495a2e2bd" containerID="476354d0c9245a7e9ae80eb75f4eff7b4e1b0b375cbbfa5baee31f4c00b6e8eb" exitCode=0 Feb 18 15:45:02 crc kubenswrapper[4957]: I0218 15:45:02.472630 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" event={"ID":"76bff772-eb9c-4632-9335-cb4495a2e2bd","Type":"ContainerDied","Data":"476354d0c9245a7e9ae80eb75f4eff7b4e1b0b375cbbfa5baee31f4c00b6e8eb"} Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.213495 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:45:03 crc kubenswrapper[4957]: E0218 15:45:03.214107 4957 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.922852 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d"
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.990108 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76bff772-eb9c-4632-9335-cb4495a2e2bd-config-volume\") pod \"76bff772-eb9c-4632-9335-cb4495a2e2bd\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") "
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.990214 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76bff772-eb9c-4632-9335-cb4495a2e2bd-secret-volume\") pod \"76bff772-eb9c-4632-9335-cb4495a2e2bd\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") "
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.990382 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hnw\" (UniqueName: \"kubernetes.io/projected/76bff772-eb9c-4632-9335-cb4495a2e2bd-kube-api-access-g8hnw\") pod \"76bff772-eb9c-4632-9335-cb4495a2e2bd\" (UID: \"76bff772-eb9c-4632-9335-cb4495a2e2bd\") "
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.991525 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76bff772-eb9c-4632-9335-cb4495a2e2bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "76bff772-eb9c-4632-9335-cb4495a2e2bd" (UID: "76bff772-eb9c-4632-9335-cb4495a2e2bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.996572 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76bff772-eb9c-4632-9335-cb4495a2e2bd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 15:45:03 crc kubenswrapper[4957]: I0218 15:45:03.999753 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bff772-eb9c-4632-9335-cb4495a2e2bd-kube-api-access-g8hnw" (OuterVolumeSpecName: "kube-api-access-g8hnw") pod "76bff772-eb9c-4632-9335-cb4495a2e2bd" (UID: "76bff772-eb9c-4632-9335-cb4495a2e2bd"). InnerVolumeSpecName "kube-api-access-g8hnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.096619 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76bff772-eb9c-4632-9335-cb4495a2e2bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76bff772-eb9c-4632-9335-cb4495a2e2bd" (UID: "76bff772-eb9c-4632-9335-cb4495a2e2bd"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.098561 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76bff772-eb9c-4632-9335-cb4495a2e2bd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.098666 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8hnw\" (UniqueName: \"kubernetes.io/projected/76bff772-eb9c-4632-9335-cb4495a2e2bd-kube-api-access-g8hnw\") on node \"crc\" DevicePath \"\"" Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.499911 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" event={"ID":"76bff772-eb9c-4632-9335-cb4495a2e2bd","Type":"ContainerDied","Data":"294970cbf96926474aa31e456b34d23803de6beabf6e222df0ee6f092c0fe731"} Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.499951 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294970cbf96926474aa31e456b34d23803de6beabf6e222df0ee6f092c0fe731" Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.500032 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523825-4qt9d" Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.566205 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh"] Feb 18 15:45:04 crc kubenswrapper[4957]: I0218 15:45:04.575890 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-xzvqh"] Feb 18 15:45:06 crc kubenswrapper[4957]: I0218 15:45:06.230518 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951cc00a-60f2-4f26-a0a9-8c9313980f92" path="/var/lib/kubelet/pods/951cc00a-60f2-4f26-a0a9-8c9313980f92/volumes" Feb 18 15:45:15 crc kubenswrapper[4957]: I0218 15:45:15.213343 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:45:15 crc kubenswrapper[4957]: E0218 15:45:15.216291 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:45:17 crc kubenswrapper[4957]: I0218 15:45:17.546193 4957 scope.go:117] "RemoveContainer" containerID="fc401c94ce371f721f34a16ad4c8c8adea7e25d149a3ef0ca9b1f0010f8a81ca" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.372622 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z6ztd"] Feb 18 15:45:19 crc kubenswrapper[4957]: E0218 15:45:19.374013 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76bff772-eb9c-4632-9335-cb4495a2e2bd" containerName="collect-profiles" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.374032 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="76bff772-eb9c-4632-9335-cb4495a2e2bd" containerName="collect-profiles" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.374295 4957 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="76bff772-eb9c-4632-9335-cb4495a2e2bd" containerName="collect-profiles" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.376497 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.386630 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6ztd"] Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.463396 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdd9v\" (UniqueName: \"kubernetes.io/projected/3b587434-6d96-435f-bb3b-571aeb575c67-kube-api-access-mdd9v\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.463704 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-catalog-content\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.463778 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-utilities\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.566985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdd9v\" (UniqueName: \"kubernetes.io/projected/3b587434-6d96-435f-bb3b-571aeb575c67-kube-api-access-mdd9v\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.567598 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-catalog-content\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.568148 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-catalog-content\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.568568 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-utilities\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.568627 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-utilities\") pod \"community-operators-z6ztd\" 
(UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.596071 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdd9v\" (UniqueName: \"kubernetes.io/projected/3b587434-6d96-435f-bb3b-571aeb575c67-kube-api-access-mdd9v\") pod \"community-operators-z6ztd\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") " pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:19 crc kubenswrapper[4957]: I0218 15:45:19.705751 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:20 crc kubenswrapper[4957]: I0218 15:45:20.280315 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6ztd"] Feb 18 15:45:20 crc kubenswrapper[4957]: W0218 15:45:20.282877 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b587434_6d96_435f_bb3b_571aeb575c67.slice/crio-d621cb98e4320f60fc5ca7d0c4826fd580c809e4e411b621274a871fadef68e4 WatchSource:0}: Error finding container d621cb98e4320f60fc5ca7d0c4826fd580c809e4e411b621274a871fadef68e4: Status 404 returned error can't find the container with id d621cb98e4320f60fc5ca7d0c4826fd580c809e4e411b621274a871fadef68e4 Feb 18 15:45:20 crc kubenswrapper[4957]: I0218 15:45:20.707214 4957 generic.go:334] "Generic (PLEG): container finished" podID="3b587434-6d96-435f-bb3b-571aeb575c67" containerID="dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a" exitCode=0 Feb 18 15:45:20 crc kubenswrapper[4957]: I0218 15:45:20.707273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerDied","Data":"dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a"} Feb 18 15:45:20 crc kubenswrapper[4957]: I0218 15:45:20.707611 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerStarted","Data":"d621cb98e4320f60fc5ca7d0c4826fd580c809e4e411b621274a871fadef68e4"} Feb 18 15:45:22 crc kubenswrapper[4957]: I0218 15:45:22.736625 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerStarted","Data":"8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb"} Feb 18 15:45:24 crc kubenswrapper[4957]: I0218 15:45:24.763258 4957 generic.go:334] "Generic (PLEG): container finished" podID="3b587434-6d96-435f-bb3b-571aeb575c67" containerID="8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb" exitCode=0 Feb 18 15:45:24 crc kubenswrapper[4957]: I0218 15:45:24.763364 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerDied","Data":"8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb"} Feb 18 15:45:25 crc kubenswrapper[4957]: I0218 15:45:25.780340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerStarted","Data":"804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362"} Feb 18 15:45:25 
crc kubenswrapper[4957]: I0218 15:45:25.812489 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z6ztd" podStartSLOduration=2.189870884 podStartE2EDuration="6.812452936s" podCreationTimestamp="2026-02-18 15:45:19 +0000 UTC" firstStartedPulling="2026-02-18 15:45:20.710497516 +0000 UTC m=+4427.231362270" lastFinishedPulling="2026-02-18 15:45:25.333079548 +0000 UTC m=+4431.853944322" observedRunningTime="2026-02-18 15:45:25.809255004 +0000 UTC m=+4432.330119748" watchObservedRunningTime="2026-02-18 15:45:25.812452936 +0000 UTC m=+4432.333317720" Feb 18 15:45:27 crc kubenswrapper[4957]: I0218 15:45:27.212698 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:45:27 crc kubenswrapper[4957]: E0218 15:45:27.213618 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:45:29 crc kubenswrapper[4957]: I0218 15:45:29.706662 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:29 crc kubenswrapper[4957]: I0218 15:45:29.707013 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:29 crc kubenswrapper[4957]: I0218 15:45:29.770764 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:39 crc kubenswrapper[4957]: I0218 15:45:39.773745 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6ztd" Feb 18 15:45:39 crc kubenswrapper[4957]: I0218 15:45:39.838186 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6ztd"] Feb 18 15:45:39 crc kubenswrapper[4957]: I0218 15:45:39.999246 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6ztd" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="registry-server" containerID="cri-o://804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362" gracePeriod=2 Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.214041 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:45:40 crc kubenswrapper[4957]: E0218 15:45:40.214355 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.618771 4957 util.go:48] "No ready sandbox for pod can be found. 
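
The pod_startup_latency_tracker entry above encodes a small calculation worth spelling out: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (15:45:25.81 minus 15:45:19, about 6.81s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, about 4.62s) from that, leaving the roughly 2.19s the tracker reports. A short sketch recomputing both figures from the timestamps in the entry (the results agree with the logged values to within the precision printed):

    package main

    import (
        "fmt"
        "time"
    )

    // Recomputes the two durations reported by pod_startup_latency_tracker
    // for community-operators-z6ztd from the timestamps in the entry above.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-18 15:45:19 +0000 UTC")
        firstPull := parse("2026-02-18 15:45:20.710497516 +0000 UTC")
        lastPull := parse("2026-02-18 15:45:25.333079548 +0000 UTC")
        running := parse("2026-02-18 15:45:25.812452936 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: ~6.812s
        slo := e2e - lastPull.Sub(firstPull) // minus the pull window: ~2.190s
        fmt.Println(e2e, slo)
    }
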
Feb 18 15:45:27 crc kubenswrapper[4957]: I0218 15:45:27.212698 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:45:27 crc kubenswrapper[4957]: E0218 15:45:27.213618 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:45:29 crc kubenswrapper[4957]: I0218 15:45:29.706662 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6ztd"
Feb 18 15:45:29 crc kubenswrapper[4957]: I0218 15:45:29.707013 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z6ztd"
Feb 18 15:45:29 crc kubenswrapper[4957]: I0218 15:45:29.770764 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6ztd"
Feb 18 15:45:39 crc kubenswrapper[4957]: I0218 15:45:39.773745 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6ztd"
Feb 18 15:45:39 crc kubenswrapper[4957]: I0218 15:45:39.838186 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6ztd"]
Feb 18 15:45:39 crc kubenswrapper[4957]: I0218 15:45:39.999246 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6ztd" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="registry-server" containerID="cri-o://804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362" gracePeriod=2
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.214041 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:45:40 crc kubenswrapper[4957]: E0218 15:45:40.214355 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.618771 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6ztd"
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.729808 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-catalog-content\") pod \"3b587434-6d96-435f-bb3b-571aeb575c67\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") "
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.729893 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-utilities\") pod \"3b587434-6d96-435f-bb3b-571aeb575c67\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") "
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.730053 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdd9v\" (UniqueName: \"kubernetes.io/projected/3b587434-6d96-435f-bb3b-571aeb575c67-kube-api-access-mdd9v\") pod \"3b587434-6d96-435f-bb3b-571aeb575c67\" (UID: \"3b587434-6d96-435f-bb3b-571aeb575c67\") "
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.731936 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-utilities" (OuterVolumeSpecName: "utilities") pod "3b587434-6d96-435f-bb3b-571aeb575c67" (UID: "3b587434-6d96-435f-bb3b-571aeb575c67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.754000 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b587434-6d96-435f-bb3b-571aeb575c67-kube-api-access-mdd9v" (OuterVolumeSpecName: "kube-api-access-mdd9v") pod "3b587434-6d96-435f-bb3b-571aeb575c67" (UID: "3b587434-6d96-435f-bb3b-571aeb575c67"). InnerVolumeSpecName "kube-api-access-mdd9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.810684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b587434-6d96-435f-bb3b-571aeb575c67" (UID: "3b587434-6d96-435f-bb3b-571aeb575c67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.832451 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.832482 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b587434-6d96-435f-bb3b-571aeb575c67-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 15:45:40 crc kubenswrapper[4957]: I0218 15:45:40.832493 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdd9v\" (UniqueName: \"kubernetes.io/projected/3b587434-6d96-435f-bb3b-571aeb575c67-kube-api-access-mdd9v\") on node \"crc\" DevicePath \"\""
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.014286 4957 generic.go:334] "Generic (PLEG): container finished" podID="3b587434-6d96-435f-bb3b-571aeb575c67" containerID="804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362" exitCode=0
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.014344 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerDied","Data":"804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362"}
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.014379 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6ztd" event={"ID":"3b587434-6d96-435f-bb3b-571aeb575c67","Type":"ContainerDied","Data":"d621cb98e4320f60fc5ca7d0c4826fd580c809e4e411b621274a871fadef68e4"}
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.014381 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6ztd"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.014396 4957 scope.go:117] "RemoveContainer" containerID="804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.067235 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6ztd"]
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.070045 4957 scope.go:117] "RemoveContainer" containerID="8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.080806 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z6ztd"]
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.105802 4957 scope.go:117] "RemoveContainer" containerID="dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.211738 4957 scope.go:117] "RemoveContainer" containerID="804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362"
Feb 18 15:45:41 crc kubenswrapper[4957]: E0218 15:45:41.212462 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362\": container with ID starting with 804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362 not found: ID does not exist" containerID="804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.212516 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362"} err="failed to get container status \"804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362\": rpc error: code = NotFound desc = could not find container \"804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362\": container with ID starting with 804ff423b59f964f63b21438d23cec1d80ecfddeae114284ad6738bf9e1f5362 not found: ID does not exist"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.212555 4957 scope.go:117] "RemoveContainer" containerID="8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb"
Feb 18 15:45:41 crc kubenswrapper[4957]: E0218 15:45:41.212971 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb\": container with ID starting with 8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb not found: ID does not exist" containerID="8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.213000 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb"} err="failed to get container status \"8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb\": rpc error: code = NotFound desc = could not find container \"8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb\": container with ID starting with 8fed38fc5f65ffcd11ec571db11ae7614d59cee12018af276a83d4b8a09ebccb not found: ID does not exist"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.213019 4957 scope.go:117] "RemoveContainer" containerID="dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a"
Feb 18 15:45:41 crc kubenswrapper[4957]: E0218 15:45:41.213576 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a\": container with ID starting with dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a not found: ID does not exist" containerID="dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a"
Feb 18 15:45:41 crc kubenswrapper[4957]: I0218 15:45:41.213717 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a"} err="failed to get container status \"dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a\": rpc error: code = NotFound desc = could not find container \"dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a\": container with ID starting with dc8eb03560e36818cf81f0e2ffe91fe59d3687b321bad53181b396be2dc46a1a not found: ID does not exist"
containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:46:31 crc kubenswrapper[4957]: E0218 15:46:31.215553 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:46:43 crc kubenswrapper[4957]: I0218 15:46:43.214034 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:46:43 crc kubenswrapper[4957]: E0218 15:46:43.214863 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:46:54 crc kubenswrapper[4957]: I0218 15:46:54.219763 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:46:54 crc kubenswrapper[4957]: E0218 15:46:54.220631 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.778538 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:47:02 crc kubenswrapper[4957]: E0218 15:47:02.779537 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="registry-server" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.779552 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="registry-server" Feb 18 15:47:02 crc kubenswrapper[4957]: E0218 15:47:02.779587 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="extract-content" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.779593 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="extract-content" Feb 18 15:47:02 crc kubenswrapper[4957]: E0218 15:47:02.779608 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="extract-utilities" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.779617 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="extract-utilities" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.779854 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b587434-6d96-435f-bb3b-571aeb575c67" containerName="registry-server" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.780740 4957 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.784698 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l59wk" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.784866 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.784970 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.785497 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.791849 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.880570 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dnwm\" (UniqueName: \"kubernetes.io/projected/b596b9fb-f116-4712-81fa-9382d13c295b-kube-api-access-9dnwm\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.880641 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.880776 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.880799 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.880976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.880997 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.881145 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.881201 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.881244 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984664 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dnwm\" (UniqueName: \"kubernetes.io/projected/b596b9fb-f116-4712-81fa-9382d13c295b-kube-api-access-9dnwm\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984690 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984730 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984754 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984798 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984858 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.984907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.985538 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.986300 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.986836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.987866 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.988607 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.995239 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 15:47:02 crc kubenswrapper[4957]: I0218 15:47:02.996808 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest" Feb 18 
Feb 18 15:47:03 crc kubenswrapper[4957]: I0218 15:47:03.028119 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:47:03 crc kubenswrapper[4957]: I0218 15:47:03.033153 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dnwm\" (UniqueName: \"kubernetes.io/projected/b596b9fb-f116-4712-81fa-9382d13c295b-kube-api-access-9dnwm\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest"
Feb 18 15:47:03 crc kubenswrapper[4957]: I0218 15:47:03.095921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " pod="openstack/tempest-tests-tempest"
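
The tempest-tests-tempest volume setup above walks the volume manager's fixed order: VerifyControllerAttachedVolume first confirms each volume against the API view, MountVolume.MountDevice then stages device-backed volumes once per node (only the local volume local-storage09-crc needs that step here, landing on /mnt/openstack/pv09; the configmap, secret, projected and empty-dir plugins have no device stage), and MountVolume.SetUp finally exposes each volume inside the pod's directory. A compressed sketch of that ordering (a placeholder pipeline with plugin kinds taken from the entries above, not kubelet code):

    package main

    import "fmt"

    // volume mirrors just enough of the log above: only device-backed
    // volumes (the local PV) get a MountDevice stage before SetUp.
    type volume struct {
        name         string
        deviceBacked bool
    }

    func reconcile(v volume) {
        fmt.Println("VerifyControllerAttachedVolume:", v.name)
        if v.deviceBacked {
            fmt.Println("MountVolume.MountDevice:", v.name)
        }
        fmt.Println("MountVolume.SetUp:", v.name)
    }

    func main() {
        for _, v := range []volume{
            {"config-data", false},        // configmap
            {"ca-certs", false},           // secret
            {"local-storage09-crc", true}, // local PV, needs MountDevice
        } {
            reconcile(v)
        }
    }
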
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:47:43 crc kubenswrapper[4957]: E0218 15:47:43.646190 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 18 15:47:43 crc kubenswrapper[4957]: E0218 15:47:43.856244 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dnwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},
Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b596b9fb-f116-4712-81fa-9382d13c295b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 15:47:43 crc kubenswrapper[4957]: E0218 15:47:43.857510 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="b596b9fb-f116-4712-81fa-9382d13c295b" Feb 18 15:47:44 crc kubenswrapper[4957]: E0218 15:47:44.288327 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b596b9fb-f116-4712-81fa-9382d13c295b" Feb 18 15:47:46 crc kubenswrapper[4957]: I0218 15:47:46.020616 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:47:46 crc kubenswrapper[4957]: E0218 15:47:46.020884 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:47:58 crc kubenswrapper[4957]: I0218 15:47:58.214113 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:47:58 crc kubenswrapper[4957]: E0218 15:47:58.214804 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:47:59 crc kubenswrapper[4957]: I0218 15:47:59.533861 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 15:48:01 crc kubenswrapper[4957]: I0218 15:48:01.310936 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b596b9fb-f116-4712-81fa-9382d13c295b","Type":"ContainerStarted","Data":"4e6f4ccd1a9e888690a11595ded507a9c3931c292d2c13f291becb44621b3c2b"} Feb 18 15:48:01 crc kubenswrapper[4957]: I0218 15:48:01.343494 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.960244316 podStartE2EDuration="1m0.343471553s" podCreationTimestamp="2026-02-18 15:47:01 +0000 UTC" firstStartedPulling="2026-02-18 15:47:04.143773088 +0000 UTC m=+4530.664637832" lastFinishedPulling="2026-02-18 15:47:59.527000305 +0000 UTC m=+4586.047865069" observedRunningTime="2026-02-18 15:48:01.335917127 +0000 UTC m=+4587.856781871" 
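
The sequence above is the standard image-pull failure escalation: the CRI pull of openstack-tempest-all is cancelled mid-copy (context canceled), kubelet surfaces that once as ErrImagePull together with a dump of the entire container spec it could not start, and the very next sync downgrades to ImagePullBackOff while the retry timer runs. The pull succeeds on retry and the container starts at 15:48:01 below, so the errors were transient. When triaging a log like this one, filtering for the "Error syncing pod" entries and counting them per pod makes the churn easy to survey; a small scanner sketch (reads a plain-text kubelet log on stdin, with the entry format assumed to match this file):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // Counts "Error syncing pod" entries per pod in a kubelet log on stdin.
    func main() {
        podRe := regexp.MustCompile(`pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // spec dumps exceed the 64K default
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "Error syncing pod") {
                continue
            }
            if m := podRe.FindStringSubmatch(line); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%5d  %s\n", n, pod)
        }
    }
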
Feb 18 15:47:46 crc kubenswrapper[4957]: I0218 15:47:46.020616 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:47:46 crc kubenswrapper[4957]: E0218 15:47:46.020884 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:47:58 crc kubenswrapper[4957]: I0218 15:47:58.214113 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:47:58 crc kubenswrapper[4957]: E0218 15:47:58.214804 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:47:59 crc kubenswrapper[4957]: I0218 15:47:59.533861 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 18 15:48:01 crc kubenswrapper[4957]: I0218 15:48:01.310936 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b596b9fb-f116-4712-81fa-9382d13c295b","Type":"ContainerStarted","Data":"4e6f4ccd1a9e888690a11595ded507a9c3931c292d2c13f291becb44621b3c2b"}
Feb 18 15:48:01 crc kubenswrapper[4957]: I0218 15:48:01.343494 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.960244316 podStartE2EDuration="1m0.343471553s" podCreationTimestamp="2026-02-18 15:47:01 +0000 UTC" firstStartedPulling="2026-02-18 15:47:04.143773088 +0000 UTC m=+4530.664637832" lastFinishedPulling="2026-02-18 15:47:59.527000305 +0000 UTC m=+4586.047865069" observedRunningTime="2026-02-18 15:48:01.335917127 +0000 UTC m=+4587.856781871" watchObservedRunningTime="2026-02-18 15:48:01.343471553 +0000 UTC m=+4587.864336297"
Feb 18 15:48:13 crc kubenswrapper[4957]: I0218 15:48:13.213877 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247"
Feb 18 15:48:15 crc kubenswrapper[4957]: I0218 15:48:15.498370 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"d09aa8248fe62b5495e82460606315a805e4d670c5f748bbdf39b1d2a62e0d2a"}
Feb 18 15:50:00 crc kubenswrapper[4957]: I0218 15:50:00.833075 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out"
Feb 18 15:50:00 crc kubenswrapper[4957]: I0218 15:50:00.833150 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out"
Feb 18 15:50:01 crc kubenswrapper[4957]: I0218 15:50:01.213875 4957 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:01 crc kubenswrapper[4957]: I0218 15:50:01.217626 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:02 crc kubenswrapper[4957]: I0218 15:50:02.128551 4957 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-vzkmx container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:02 crc kubenswrapper[4957]: I0218 15:50:02.128894 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" podUID="7bf5dd6b-3bc3-4ead-8fab-478e02b32496" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:02 crc kubenswrapper[4957]: I0218 15:50:02.494584 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:02 crc kubenswrapper[4957]: I0218 15:50:02.494567 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:02 crc kubenswrapper[4957]: I0218 15:50:02.794706 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out"
Feb 18 15:50:02 crc kubenswrapper[4957]: I0218 15:50:02.794863 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out"
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.710465 4957 trace.go:236] Trace[267008339]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (18-Feb-2026 15:50:01.608) (total time: 2094ms):
Feb 18 15:50:03 crc kubenswrapper[4957]: Trace[267008339]: [2.094037004s] [2.094037004s] END
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.732927 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.732985 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.733153 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.733207 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.935023 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.935106 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.944153 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:03 crc kubenswrapper[4957]: I0218 15:50:03.944232 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.058499 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.058773 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.058534 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.058992 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.148578 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.148652 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.148941 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:04 crc kubenswrapper[4957]: I0218 15:50:04.148987 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.172621 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.173000 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.201851 4957 patch_prober.go:28] interesting pod/oauth-openshift-76b96558df-8qxws container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.201909 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.201955 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.201909 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podUID="3c63d324-6b52-4815-922d-3dc270315126" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.201857 4957 patch_prober.go:28] interesting pod/oauth-openshift-76b96558df-8qxws container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.202060 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podUID="3c63d324-6b52-4815-922d-3dc270315126" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
(Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.423176 4957 patch_prober.go:28] interesting pod/metrics-server-7ffc4d6784-c7kvk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.423231 4957 patch_prober.go:28] interesting pod/metrics-server-7ffc4d6784-c7kvk container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.423253 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podUID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.423301 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podUID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.692629 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.692672 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.693040 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.693117 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.692705 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe 
status=failure output="Get \"https://10.217.0.31:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.692723 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.693190 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.693181 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.877573 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.877730 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.878104 4957 patch_prober.go:28] interesting pod/console-84fccf866c-pq2q4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.878150 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-84fccf866c-pq2q4" podUID="95e72012-f049-432b-8490-b92c0e5724b8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.953795 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.953880 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.954676 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded" start-of-body= Feb 18 15:50:06 crc kubenswrapper[4957]: I0218 15:50:06.954730 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.011604 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.011641 4957 patch_prober.go:28] interesting pod/monitoring-plugin-6fb88c9bd-6wgxk container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.011693 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" podUID="4d4390cd-8a73-4bc0-8a08-8f018c308d17" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.011712 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.293624 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.295241 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.742496 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure 
output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.742572 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.743300 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:07 crc kubenswrapper[4957]: I0218 15:50:07.743369 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:09 crc kubenswrapper[4957]: I0218 15:50:09.053631 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:09 crc kubenswrapper[4957]: I0218 15:50:09.105211 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:10 crc kubenswrapper[4957]: I0218 15:50:10.800243 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:10 crc kubenswrapper[4957]: I0218 15:50:10.800235 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.143649 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" podUID="6c6f7318-74c7-4971-9888-45a6c025bdde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.451625 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.622968 4957 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:50:12 crc kubenswrapper[4957]: timeout: health rpc did not complete within 1s Feb 18 15:50:12 crc kubenswrapper[4957]: > Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.622972 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:50:12 crc kubenswrapper[4957]: timeout: health rpc did not complete within 1s Feb 18 15:50:12 crc kubenswrapper[4957]: > Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.622968 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:50:12 crc kubenswrapper[4957]: timeout: health rpc did not complete within 1s Feb 18 15:50:12 crc kubenswrapper[4957]: > Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.622968 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:50:12 crc kubenswrapper[4957]: timeout: health rpc did not complete within 1s Feb 18 15:50:12 crc kubenswrapper[4957]: > Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.795028 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:12 crc kubenswrapper[4957]: I0218 15:50:12.795080 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.175947 4957 patch_prober.go:28] interesting pod/thanos-querier-5bd76d8bd7-7bctr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.176445 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podUID="513d3d65-d89a-4418-b0c2-4eadd3ae5600" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.735091 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.735090 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f 
container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.735221 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.735152 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.937529 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.937866 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.937692 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.937959 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.945034 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.945113 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 
15:50:13.945066 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:13 crc kubenswrapper[4957]: I0218 15:50:13.945211 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.060083 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.060443 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.060137 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.060539 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.065771 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.065832 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.065958 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.066035 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.795263 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.795673 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.959614 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:14 crc kubenswrapper[4957]: I0218 15:50:14.959957 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.057687 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" podUID="e6651ea1-6311-4597-81cc-a8637f8cc88a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.090500 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.090816 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.090651 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator 
namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.090942 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.136585 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" podUID="cc38dff8-4b46-4281-96a3-ff88c8200f59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.282615 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" podUID="86c162c7-c82d-4627-bf84-11d5fb80199f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.323657 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" podUID="07a618be-7572-49b8-aeb3-12ce37fbe7b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.412624 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.412639 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.423447 4957 patch_prober.go:28] interesting pod/metrics-server-7ffc4d6784-c7kvk container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.423506 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podUID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.547694 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.617101 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.617116 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.657641 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.657987 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.698694 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.698717 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.698788 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.698771 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.698988 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.699288 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.711752 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.711900 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.797645 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.838944 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.839005 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.839017 4957 patch_prober.go:28] interesting pod/console-84fccf866c-pq2q4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.839104 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.839146 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-84fccf866c-pq2q4" podUID="95e72012-f049-432b-8490-b92c0e5724b8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.921600 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.921660 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.921726 4957 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-vd6hx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.921861 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" podUID="7e32179f-a59d-44e1-9a56-ca25b8c5ff21" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.962667 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:16 crc kubenswrapper[4957]: I0218 15:50:16.962723 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003663 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003653 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003784 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003700 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003857 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003714 4957 patch_prober.go:28] interesting pod/monitoring-plugin-6fb88c9bd-6wgxk container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003747 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003905 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" podUID="4d4390cd-8a73-4bc0-8a08-8f018c308d17" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003935 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.003762 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc 
kubenswrapper[4957]: I0218 15:50:17.004025 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.081824 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.081864 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.081901 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.081923 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.203568 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.244656 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.327599 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.327676 4957 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podUID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.807628 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.807889 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-vwl5q" podUID="3b69da89-d2a6-4e8f-ac79-99e1bb296fcc" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.819859 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podUID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.819880 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.819997 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podUID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.820026 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.820065 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.820177 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.820269 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podUID="11cb8341-3939-4c82-9745-510f73904864" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.935823 4957 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-khfcc container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.935891 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.936351 4957 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-khfcc container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.936400 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:17 crc kubenswrapper[4957]: I0218 15:50:17.985805 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" podUID="4f287d67-8d26-430a-a775-fdf0abeed6dd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.068026 4957 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-t4b27 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.068140 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" podUID="c093fd9d-72e8-42d1-a5ad-5e687f61aa9e" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.144530 4957 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-x4qsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.144596 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" podUID="d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7" containerName="loki-query-frontend" probeResult="failure" output="Get 
\"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.175128 4957 patch_prober.go:28] interesting pod/thanos-querier-5bd76d8bd7-7bctr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.175184 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podUID="513d3d65-d89a-4418-b0c2-4eadd3ae5600" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.205905 4957 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-2wbxs container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.205981 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" podUID="4de1a2b8-9bfb-4104-b065-e0c991cb95ea" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.514659 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.514978 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.936343 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.936802 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.946487 4957 patch_prober.go:28] interesting 
pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.946524 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989668 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989718 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989676 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989747 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989729 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989801 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989804 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:18 crc kubenswrapper[4957]: I0218 15:50:18.989832 4957 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.067660 4957 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-t4b27 container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.067733 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" podUID="c093fd9d-72e8-42d1-a5ad-5e687f61aa9e" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.144642 4957 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-x4qsb container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.144739 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" podUID="d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.202715 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.202727 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.202931 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podUID="11cb8341-3939-4c82-9745-510f73904864" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.204459 4957 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-2wbxs container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.204526 4957 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" podUID="4de1a2b8-9bfb-4104-b065-e0c991cb95ea" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.285581 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.285588 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.285960 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.310106 4957 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.310245 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c7e42da2-0160-4c19-bd98-1ebb4d0d84dc" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.367683 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-hw6sv" podUID="3929daaa-39b8-475f-9af0-644180cb7682" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.367650 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.367756 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 
15:50:19.367798 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-hw6sv" podUID="3929daaa-39b8-475f-9af0-644180cb7682" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.383821 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.384250 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.471575 4957 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.471650 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8ebf8bd1-097b-45c7-be49-c38760e885e2" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.798652 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.798673 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" podUID="910cf14f-cf93-4db8-8681-12130ae2ae27" containerName="nbdb" probeResult="failure" output="command timed out" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.798657 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" podUID="910cf14f-cf93-4db8-8681-12130ae2ae27" containerName="sbdb" probeResult="failure" output="command timed out" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.798774 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.936573 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8081/live\": context deadline exceeded" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.936811 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" 
probeResult="failure" output="Get \"https://10.217.0.55:8081/live\": context deadline exceeded" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.936598 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.936903 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.945797 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.945832 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.945880 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:19 crc kubenswrapper[4957]: I0218 15:50:19.945897 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.081954 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.082085 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.082124 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.082359 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.309180 4957 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.309240 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c7e42da2-0160-4c19-bd98-1ebb4d0d84dc" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.340334 4957 trace.go:236] Trace[1738606906]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (18-Feb-2026 15:50:12.126) (total time: 8208ms): Feb 18 15:50:20 crc kubenswrapper[4957]: Trace[1738606906]: [8.208652929s] [8.208652929s] END Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.699622 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:20 crc kubenswrapper[4957]: I0218 15:50:20.699714 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.196501 4957 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.196786 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.451309 4957 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc 
container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.451366 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.711915 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.711974 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.794708 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.808443 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.825735 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.827732 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 15:50:21 crc kubenswrapper[4957]: I0218 15:50:21.832007 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.087156 4957 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-vzkmx container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.087236 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" podUID="7bf5dd6b-3bc3-4ead-8fab-478e02b32496" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.183631 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" podUID="6c6f7318-74c7-4971-9888-45a6c025bdde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.183692 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" podUID="6c6f7318-74c7-4971-9888-45a6c025bdde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.495604 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.495679 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.495731 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.794819 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.795767 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-vwl5q" podUID="3b69da89-d2a6-4e8f-ac79-99e1bb296fcc" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.796023 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.796140 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.796485 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.796575 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.799887 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.799987 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.800130 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:22 crc kubenswrapper[4957]: I0218 15:50:22.813092 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"b75fc79baf9d486a4ac87956cd0d8a072675147f14f4b685af9fdcd0c2c1d609"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.089935 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podUID="11cb8341-3939-4c82-9745-510f73904864" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.43:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.172619 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.172936 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.173011 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.172736 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.173412 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.173553 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.175052 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"613e0204559e97040ebfe66db7d75d7effa45b74745a4dccc086b2e01d24ad10"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.175090 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" containerID="cri-o://613e0204559e97040ebfe66db7d75d7effa45b74745a4dccc086b2e01d24ad10" gracePeriod=30 Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.175290 4957 patch_prober.go:28] interesting pod/thanos-querier-5bd76d8bd7-7bctr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.175313 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podUID="513d3d65-d89a-4418-b0c2-4eadd3ae5600" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.175887 4957 patch_prober.go:28] interesting pod/thanos-querier-5bd76d8bd7-7bctr container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.175974 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podUID="513d3d65-d89a-4418-b0c2-4eadd3ae5600" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.537670 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.732588 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.732648 4957 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.732688 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.733218 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.733290 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.733429 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.734056 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"a95bc19447cc8da6480533d1418392662f3d637a86c7ede2044b609a58c281db"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.734104 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" containerID="cri-o://a95bc19447cc8da6480533d1418392662f3d637a86c7ede2044b609a58c281db" gracePeriod=30 Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.811433 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.935917 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.935973 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" 
containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.935918 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.936024 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.944139 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.944170 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.945258 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:23 crc kubenswrapper[4957]: I0218 15:50:23.945281 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.058641 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.058644 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.059347 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" 
podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.059287 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.059410 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.060904 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"fc933db99029da447db2b03fe12f3fa610f419a635cff15cf52b13279a672238"} pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" containerMessage="Container controller-manager failed liveness probe, will be restarted" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.060947 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" containerID="cri-o://fc933db99029da447db2b03fe12f3fa610f419a635cff15cf52b13279a672238" gracePeriod=30 Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.065603 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.065669 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.065701 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.065739 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.065750 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.067134 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"3ebc227181edc68e492ff82667e56ce703d103b6b3b7df6b20a3689f34338478"} pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.067180 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" containerID="cri-o://3ebc227181edc68e492ff82667e56ce703d103b6b3b7df6b20a3689f34338478" gracePeriod=30 Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.178645 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.178723 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.796275 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.796444 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:50:24 crc kubenswrapper[4957]: I0218 15:50:24.796861 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.000747 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.000817 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.000814 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager 
namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.000866 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.119729 4957 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2db8v container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.119805 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" podUID="1889400f-2fff-4c67-b401-966e820d5a26" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.119817 4957 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2db8v container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.119872 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" podUID="1889400f-2fff-4c67-b401-966e820d5a26" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.762877 4957 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fpnw9 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.763298 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" podUID="b57b2a2c-ac8c-4c84-84c4-c24479600b71" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.762938 4957 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fpnw9 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.763410 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" 
podUID="b57b2a2c-ac8c-4c84-84c4-c24479600b71" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.769070 4957 patch_prober.go:28] interesting pod/console-84fccf866c-pq2q4 container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.769133 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-84fccf866c-pq2q4" podUID="95e72012-f049-432b-8490-b92c0e5724b8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.769195 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.770636 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"5e1bcb031d8c22adc291ab4ddeff87798dc82f26b4768e6947cd172516be96d8"} pod="openshift-console/console-84fccf866c-pq2q4" containerMessage="Container console failed liveness probe, will be restarted" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.823355 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.823374 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.823367 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:25 crc kubenswrapper[4957]: I0218 15:50:25.823411 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.057032 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" podUID="e6651ea1-6311-4597-81cc-a8637f8cc88a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.097643 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" podUID="e6651ea1-6311-4597-81cc-a8637f8cc88a" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138649 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138690 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138720 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138740 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138772 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138796 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.138817 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.139472 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.141122 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"a9e6a9ad56696bcd6dfd3b32e6ea9c2bcf4aca0e403105f96e117d4daae08e28"} pod="openshift-console-operator/console-operator-58897d9998-d9hcp" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.141169 4957 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" containerID="cri-o://a9e6a9ad56696bcd6dfd3b32e6ea9c2bcf4aca0e403105f96e117d4daae08e28" gracePeriod=30 Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.263724 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" podUID="cc38dff8-4b46-4281-96a3-ff88c8200f59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.263859 4957 patch_prober.go:28] interesting pod/oauth-openshift-76b96558df-8qxws container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.263917 4957 patch_prober.go:28] interesting pod/oauth-openshift-76b96558df-8qxws container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.263933 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podUID="3c63d324-6b52-4815-922d-3dc270315126" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.263972 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podUID="3c63d324-6b52-4815-922d-3dc270315126" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.264095 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podUID="11cb8341-3939-4c82-9745-510f73904864" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.43:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.264611 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" podUID="cc38dff8-4b46-4281-96a3-ff88c8200f59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.306698 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc 
kubenswrapper[4957]: I0218 15:50:26.306771 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.347624 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.347695 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.430691 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" podUID="86c162c7-c82d-4627-bf84-11d5fb80199f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.512743 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" podUID="07a618be-7572-49b8-aeb3-12ce37fbe7b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.594732 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.595112 4957 patch_prober.go:28] interesting pod/metrics-server-7ffc4d6784-c7kvk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.595174 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podUID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.595213 4957 patch_prober.go:28] interesting pod/metrics-server-7ffc4d6784-c7kvk container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.595258 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podUID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.595306 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.595043 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" podUID="86c162c7-c82d-4627-bf84-11d5fb80199f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.616023 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"4c75ce34f7912c1678d6e79c294eb2798fa93aa8c3f970e4c466fb3bab99e09b"} pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" containerMessage="Container metrics-server failed liveness probe, will be restarted" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.616115 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" podUID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerName="metrics-server" containerID="cri-o://4c75ce34f7912c1678d6e79c294eb2798fa93aa8c3f970e4c466fb3bab99e09b" gracePeriod=170 Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.677593 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" podUID="07a618be-7572-49b8-aeb3-12ce37fbe7b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.677719 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" podUID="18f96572-e72c-48ae-b22b-4c6fb7a4d7b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.712055 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.712216 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.795655 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" podUID="910cf14f-cf93-4db8-8681-12130ae2ae27" 
containerName="ovnkube-controller" probeResult="failure" output="command timed out" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.841649 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.841654 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:26 crc kubenswrapper[4957]: I0218 15:50:26.924722 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.007622 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" podUID="18f96572-e72c-48ae-b22b-4c6fb7a4d7b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.090545 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.090645 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.090753 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.091157 4957 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-bwgmc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.091233 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" podUID="1b3f7089-9ab3-4753-b0a2-7454ed4425ac" containerName="package-server-manager" probeResult="failure" output="Get 
\"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.091297 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.091311 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.091336 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.091928 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.092677 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"433e9255f81fe43bf83563552d9bb8b7ca0e6e76cd0d686373496064bf612423"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.092722 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" containerID="cri-o://433e9255f81fe43bf83563552d9bb8b7ca0e6e76cd0d686373496064bf612423" gracePeriod=30 Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.134677 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.175653 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.175767 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.423603 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.423621 4957 patch_prober.go:28] interesting pod/console-84fccf866c-pq2q4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.423721 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-84fccf866c-pq2q4" podUID="95e72012-f049-432b-8490-b92c0e5724b8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.423794 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.423623 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.505765 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.505893 4957 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-vd6hx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.506005 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" podUID="7e32179f-a59d-44e1-9a56-ca25b8c5ff21" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.505993 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.587687 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.587712 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.629684 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.670871 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.670939 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.670991 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671040 4957 patch_prober.go:28] interesting pod/monitoring-plugin-6fb88c9bd-6wgxk container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671057 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" podUID="4d4390cd-8a73-4bc0-8a08-8f018c308d17" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671081 4957 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671020 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671115 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671155 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671162 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671155 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671205 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671232 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671235 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671250 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc 
kubenswrapper[4957]: I0218 15:50:27.671203 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671253 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671177 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671315 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671377 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671624 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671879 4957 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-bwgmc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.671910 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" podUID="1b3f7089-9ab3-4753-b0a2-7454ed4425ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.672824 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.672877 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.680572 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"c8ed8cb910878cb5bab4b11705486636080e4cd10ec369ab57d89ad0832f752e"} 
pod="openshift-ingress/router-default-5444994796-jg752" containerMessage="Container router failed liveness probe, will be restarted" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.680620 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" containerID="cri-o://c8ed8cb910878cb5bab4b11705486636080e4cd10ec369ab57d89ad0832f752e" gracePeriod=10 Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.696007 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"5955a16e3e753996736404ee9f2ef22ee71db1024fa108014ada860916093401"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.696041 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" containerID="cri-o://5955a16e3e753996736404ee9f2ef22ee71db1024fa108014ada860916093401" gracePeriod=30 Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.754838 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.796789 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-vwl5q" podUID="3b69da89-d2a6-4e8f-ac79-99e1bb296fcc" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.805480 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mjqvd" podUID="3d9fa28a-2d86-4e9f-a5da-d5f545bb0331" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.805600 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mjqvd" podUID="3d9fa28a-2d86-4e9f-a5da-d5f545bb0331" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.812198 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.839694 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.921682 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:27 crc kubenswrapper[4957]: I0218 15:50:27.921862 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.004792 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podUID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.004775 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087679 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" podUID="c17ba5f2-7fb4-4ed7-8623-f987653f8f9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087687 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087797 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087740 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podUID="11cb8341-3939-4c82-9745-510f73904864" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087855 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087934 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087950 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" 
podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087977 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.087988 4957 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-97lxp container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.088022 4957 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-97lxp container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.088044 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" podUID="ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.088111 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" podUID="ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.088222 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.088752 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.088830 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podUID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.089509 4957 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"fd27e8571ceb4b4cffa7844add2265690bd0eecf09b5f3c39992bde1f2249d31"} pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.089592 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" podUID="11cb8341-3939-4c82-9745-510f73904864" containerName="hostpath-provisioner" containerID="cri-o://fd27e8571ceb4b4cffa7844add2265690bd0eecf09b5f3c39992bde1f2249d31" gracePeriod=30 Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.170685 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.170920 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="03293231-ba61-4099-89c9-b86cd6d9f489" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.170883 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.170933 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.170801 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="03293231-ba61-4099-89c9-b86cd6d9f489" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.171105 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211804 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211854 4957 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-x4qsb container/loki-query-frontend namespace/openshift-logging: Readiness probe 
status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211834 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podUID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211899 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212090 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211804 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" podUID="4f287d67-8d26-430a-a775-fdf0abeed6dd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212641 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" podUID="c17ba5f2-7fb4-4ed7-8623-f987653f8f9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211918 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" podUID="d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.211939 4957 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-t4b27 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212189 4957 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-khfcc container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212994 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" podUID="c093fd9d-72e8-42d1-a5ad-5e687f61aa9e" containerName="loki-querier" 
probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.213040 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212196 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.213100 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212221 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.213136 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212695 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.213168 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212794 4957 patch_prober.go:28] interesting pod/thanos-querier-5bd76d8bd7-7bctr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.213208 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podUID="513d3d65-d89a-4418-b0c2-4eadd3ae5600" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.212881 4957 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-2wbxs container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.213255 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" podUID="4de1a2b8-9bfb-4104-b065-e0c991cb95ea" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.224319 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"379afd60011766f0721f864c454ea6aef3f693607460f456e8d5baa010fdd53a"} pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" containerMessage="Container operator failed liveness probe, will be restarted" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.224491 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" containerID="cri-o://379afd60011766f0721f864c454ea6aef3f693607460f456e8d5baa010fdd53a" gracePeriod=30 Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.238916 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.472655 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.513722 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.554691 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5c129411-cf16-45ad-be6b-e31866a236e7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.595824 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5c129411-cf16-45ad-be6b-e31866a236e7" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.672439 4957 patch_prober.go:28] interesting pod/monitoring-plugin-6fb88c9bd-6wgxk container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.672474 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.672550 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" podUID="4d4390cd-8a73-4bc0-8a08-8f018c308d17" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.672631 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.754628 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.754657 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.754812 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.797090 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.798568 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" 
containerName="ovsdb-server" probeResult="failure" output="command timed out" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.798944 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.798957 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerName="ovsdb-server" probeResult="failure" output="command timed out" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.799091 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" containerName="ovn-controller" probeResult="failure" output="command timed out" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.935329 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.935411 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988561 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988593 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988621 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988639 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988561 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988673 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988702 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988725 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988768 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:28 crc kubenswrapper[4957]: I0218 15:50:28.988790 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.199970 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.200021 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.240647 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.240646 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.240661 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.240787 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.322689 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-hw6sv" podUID="3929daaa-39b8-475f-9af0-644180cb7682" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.322722 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.322809 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.330586 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"49b83ce706eea250b91649b2e85af341536f1b1130e5f548bacbc3725a8b4d72"} pod="metallb-system/frr-k8s-c8kqz" containerMessage="Container frr failed liveness probe, will be restarted" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.330750 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="frr" containerID="cri-o://49b83ce706eea250b91649b2e85af341536f1b1130e5f548bacbc3725a8b4d72" gracePeriod=2 Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.363694 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.363737 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-hw6sv" podUID="3929daaa-39b8-475f-9af0-644180cb7682" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.363832 4957 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure 
output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.364087 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c7e42da2-0160-4c19-bd98-1ebb4d0d84dc" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.384477 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.384729 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.472147 4957 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.472540 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8ebf8bd1-097b-45c7-be49-c38760e885e2" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.713931 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.795246 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-vwl5q" podUID="3b69da89-d2a6-4e8f-ac79-99e1bb296fcc" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.801016 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.801105 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.802356 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"ee7bfd0150096901eaacb13ccf0a2ce0017054cd8ce3e58ca9d7e0cd72b7f6be"} 
pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.802502 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" containerID="cri-o://ee7bfd0150096901eaacb13ccf0a2ce0017054cd8ce3e58ca9d7e0cd72b7f6be" gracePeriod=30 Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.803478 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-nsb6k" podUID="a5d8a39a-4f7f-4d3e-b205-9a209721ca4b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:29 crc kubenswrapper[4957]: I0218 15:50:29.803787 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-nsb6k" podUID="a5d8a39a-4f7f-4d3e-b205-9a209721ca4b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.282724 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.697803 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.698471 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.796722 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.796736 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" podUID="910cf14f-cf93-4db8-8681-12130ae2ae27" containerName="nbdb" probeResult="failure" output="command timed out" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.796771 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vc7fk" podUID="910cf14f-cf93-4db8-8681-12130ae2ae27" containerName="sbdb" probeResult="failure" output="command timed out" Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.927689 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Feb 18 15:50:30 crc kubenswrapper[4957]: I0218 15:50:30.927744 4957 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" Feb 18 15:50:30 crc kubenswrapper[4957]: E0218 15:50:30.990626 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:50:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:50:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:50:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:50:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.081458 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.081512 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.195150 4957 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.195203 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.230273 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded" start-of-body= Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.230469 4957 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.371758 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.452144 4957 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.452246 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.583666 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="18e4612d-bb78-44c5-b59e-4dbe1342c3d3" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.223:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.710386 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.757823 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerDied","Data":"49b83ce706eea250b91649b2e85af341536f1b1130e5f548bacbc3725a8b4d72"} Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.759047 4957 generic.go:334] "Generic (PLEG): container finished" podID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerID="49b83ce706eea250b91649b2e85af341536f1b1130e5f548bacbc3725a8b4d72" exitCode=143 Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.797805 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.797944 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.799633 4957 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/swift-proxy-7c68fdd987-chglv" podUID="47005221-336b-424d-8c90-fc0c320cd135" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.212:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.799675 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7c68fdd987-chglv" podUID="47005221-336b-424d-8c90-fc0c320cd135" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.212:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:31 crc kubenswrapper[4957]: I0218 15:50:31.800750 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.086885 4957 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-vzkmx container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.087452 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vzkmx" podUID="7bf5dd6b-3bc3-4ead-8fab-478e02b32496" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.142654 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" podUID="6c6f7318-74c7-4971-9888-45a6c025bdde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.142903 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.451772 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.715075 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:32 crc kubenswrapper[4957]: E0218 15:50:32.755922 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 
15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.796192 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-vwl5q" podUID="3b69da89-d2a6-4e8f-ac79-99e1bb296fcc" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.796313 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.800675 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.869599 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.869670 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.869793 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="18e4612d-bb78-44c5-b59e-4dbe1342c3d3" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.223:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.869988 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body= Feb 18 15:50:32 crc kubenswrapper[4957]: I0218 15:50:32.870042 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.065139 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.065198 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.175043 4957 patch_prober.go:28] interesting pod/thanos-querier-5bd76d8bd7-7bctr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.175138 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5bd76d8bd7-7bctr" podUID="513d3d65-d89a-4418-b0c2-4eadd3ae5600" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.185616 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" podUID="6c6f7318-74c7-4971-9888-45a6c025bdde" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.676653 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c7999fbc4-ttfwg" podUID="b15f7971-ec1f-4b4e-ae33-45863ceb6b09" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.204:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.676655 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c7999fbc4-ttfwg" podUID="b15f7971-ec1f-4b4e-ae33-45863ceb6b09" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.204:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.676789 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-c7999fbc4-ttfwg" podUID="b15f7971-ec1f-4b4e-ae33-45863ceb6b09" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.204:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.676814 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-c7999fbc4-ttfwg" podUID="b15f7971-ec1f-4b4e-ae33-45863ceb6b09" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.204:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.792166 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-d9hcp_2d30d957-c658-4ce4-9b04-3f1d64fb67b7/console-operator/0.log" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.792242 4957 generic.go:334] "Generic (PLEG): container finished" podID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerID="a9e6a9ad56696bcd6dfd3b32e6ea9c2bcf4aca0e403105f96e117d4daae08e28" exitCode=1 Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.792280 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-d9hcp" event={"ID":"2d30d957-c658-4ce4-9b04-3f1d64fb67b7","Type":"ContainerDied","Data":"a9e6a9ad56696bcd6dfd3b32e6ea9c2bcf4aca0e403105f96e117d4daae08e28"} Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.798159 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerName="ovsdb-server" probeResult="failure" output="command timed out" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.801218 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" containerName="ovn-controller" probeResult="failure" output="command timed out" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.801235 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.801295 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerName="ovsdb-server" probeResult="failure" output="command timed out" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.801253 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-bgjwb" podUID="2c0f60cd-99b5-453c-9353-5c6298f95d2b" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.801857 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5s4rw" podUID="e2b4f5fe-0b27-47d8-8158-b51ad4229e86" containerName="ovn-controller" probeResult="failure" output="command timed out" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.936137 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.936197 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded" start-of-body= Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.936241 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.936250 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.945602 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa 
namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.945667 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.945685 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:33 crc kubenswrapper[4957]: I0218 15:50:33.945832 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.142854 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-65d4996964-zpvph" podUID="2251ef18-33b0-4454-a9ff-2a00fd4974d7" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.21:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.143298 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" podUID="bd286cd4-02f3-4357-8c0e-bf30451df530" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.20:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.143373 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.143474 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.143543 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-69cc788d47-5c2pf" podUID="bd286cd4-02f3-4357-8c0e-bf30451df530" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.20:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.143809 4957 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/heat-api-65d4996964-zpvph" podUID="2251ef18-33b0-4454-a9ff-2a00fd4974d7" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.21:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.144354 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.144706 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.798907 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.799160 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.799317 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-596b8bcf84-qf6sp" podUID="44f06eec-0e32-4246-a893-652c9b180b2c" containerName="heat-engine" probeResult="failure" output="command timed out" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.801347 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.801530 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.803210 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-596b8bcf84-qf6sp" podUID="44f06eec-0e32-4246-a893-652c9b180b2c" containerName="heat-engine" probeResult="failure" output="command timed out" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.807058 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c"} pod="openshift-marketplace/redhat-marketplace-gllqp" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.807236 4957 generic.go:334] "Generic (PLEG): container finished" podID="3925001b-348a-4dde-a066-e49891c345bb" containerID="433e9255f81fe43bf83563552d9bb8b7ca0e6e76cd0d686373496064bf612423" exitCode=0 Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.807275 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" 
event={"ID":"3925001b-348a-4dde-a066-e49891c345bb","Type":"ContainerDied","Data":"433e9255f81fe43bf83563552d9bb8b7ca0e6e76cd0d686373496064bf612423"} Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.807110 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" containerID="cri-o://0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c" gracePeriod=30 Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.959639 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.959710 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:34 crc kubenswrapper[4957]: I0218 15:50:34.959791 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.090386 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.090788 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.120658 4957 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2db8v container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.120719 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" podUID="1889400f-2fff-4c67-b401-966e820d5a26" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.120752 4957 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2db8v container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.120822 4957 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2db8v" podUID="1889400f-2fff-4c67-b401-966e820d5a26" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.644985 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.645539 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.763962 4957 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fpnw9 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.764014 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" podUID="b57b2a2c-ac8c-4c84-84c4-c24479600b71" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.764577 4957 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fpnw9 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.764607 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-fpnw9" podUID="b57b2a2c-ac8c-4c84-84c4-c24479600b71" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.797452 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.798298 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.798892 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-vrpnn" 
podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.798932 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.809400 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda"} pod="openshift-marketplace/redhat-operators-vrpnn" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.809466 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" containerID="cri-o://2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda" gracePeriod=30 Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.826834 4957 generic.go:334] "Generic (PLEG): container finished" podID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerID="379afd60011766f0721f864c454ea6aef3f693607460f456e8d5baa010fdd53a" exitCode=0 Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.826887 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" event={"ID":"955eb799-56c6-47e7-b5f7-eccac4b52134","Type":"ContainerDied","Data":"379afd60011766f0721f864c454ea6aef3f693607460f456e8d5baa010fdd53a"} Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.831848 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.831966 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.954013 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 15:50:35 crc kubenswrapper[4957]: I0218 15:50:35.954074 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.001727 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.001815 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.056737 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" podUID="e6651ea1-6311-4597-81cc-a8637f8cc88a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.056845 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.136335 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" podUID="cc38dff8-4b46-4281-96a3-ff88c8200f59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.136966 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.146722 4957 patch_prober.go:28] interesting pod/oauth-openshift-76b96558df-8qxws container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.146758 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.146836 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podUID="3c63d324-6b52-4815-922d-3dc270315126" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.146876 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.147275 4957 patch_prober.go:28] interesting pod/oauth-openshift-76b96558df-8qxws container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.147306 4957 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-76b96558df-8qxws" podUID="3c63d324-6b52-4815-922d-3dc270315126" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.230248 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.230300 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.281939 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" podUID="86c162c7-c82d-4627-bf84-11d5fb80199f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.282057 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.412677 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.413110 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.412948 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.413432 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.420495 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" containerStatusID={"Type":"cri-o","ID":"b353c117523172744de7839ed70ea1395c04c0968731c41a24a113c9cb1a10dc"} pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.420565 4957 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" containerID="cri-o://b353c117523172744de7839ed70ea1395c04c0968731c41a24a113c9cb1a10dc" gracePeriod=30 Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.486589 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" podUID="18f96572-e72c-48ae-b22b-4c6fb7a4d7b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.547638 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.547940 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.589606 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.589716 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.631683 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.631781 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.757532 4957 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-bwgmc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.757601 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" podUID="1b3f7089-9ab3-4753-b0a2-7454ed4425ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.757919 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 
container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.757995 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.758000 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.758058 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="a600b253-74c8-473b-ba57-e03ac741c902" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.758093 4957 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-bwgmc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.758110 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bwgmc" podUID="1b3f7089-9ab3-4753-b0a2-7454ed4425ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.842860 4957 generic.go:334] "Generic (PLEG): container finished" podID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerID="a95bc19447cc8da6480533d1418392662f3d637a86c7ede2044b609a58c281db" exitCode=0 Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.842943 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" event={"ID":"eba7e3a4-c6f4-4473-bcec-23a777ba8798","Type":"ContainerDied","Data":"a95bc19447cc8da6480533d1418392662f3d637a86c7ede2044b609a58c281db"} Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.848730 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"3b7595c353eb427a928b8d7d24655732b7e88bc942522b46d3007ba852114935"} Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.850457 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" event={"ID":"3925001b-348a-4dde-a066-e49891c345bb","Type":"ContainerStarted","Data":"e69c1155c801f626f32d112689ad19ece065404cef8dd25f191ce6cdfcefe7f6"} Feb 18 15:50:36 
crc kubenswrapper[4957]: I0218 15:50:36.850760 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.965617 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.966028 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.965687 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.965717 4957 patch_prober.go:28] interesting pod/console-84fccf866c-pq2q4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.966126 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.966154 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-84fccf866c-pq2q4" podUID="95e72012-f049-432b-8490-b92c0e5724b8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.139:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:36 crc kubenswrapper[4957]: I0218 15:50:36.966120 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.006930 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.007099 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.007354 4957 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-vd6hx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.007387 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" podUID="7e32179f-a59d-44e1-9a56-ca25b8c5ff21" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.007417 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.008145 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.008196 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.008364 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.008567 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.025040 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"968ae94edfcba12fecbbd2e18efed4c32a46c1e1c7b27c9816cc37fc4a4e28be"} pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.025115 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" podUID="7e32179f-a59d-44e1-9a56-ca25b8c5ff21" containerName="authentication-operator" containerID="cri-o://968ae94edfcba12fecbbd2e18efed4c32a46c1e1c7b27c9816cc37fc4a4e28be" gracePeriod=30 Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.049659 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.049733 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.049674 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.049742 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-vn8z8" podUID="8aaaba83-1c93-481a-9627-a46dbd3eef31" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.049800 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.049902 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.090672 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" podUID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.090756 4957 patch_prober.go:28] interesting pod/monitoring-plugin-6fb88c9bd-6wgxk container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.090776 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" podUID="4d4390cd-8a73-4bc0-8a08-8f018c308d17" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091088 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091105 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091131 4957 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091203 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091228 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091269 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091506 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.091525 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.092443 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"eb98dfdcf50ca67106e54b27dffd20a2d5ae30d6bd371d384f667d49a3b4466a"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" containerMessage="Container olm-operator failed liveness probe, will be restarted" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.092478 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" containerID="cri-o://eb98dfdcf50ca67106e54b27dffd20a2d5ae30d6bd371d384f667d49a3b4466a" gracePeriod=30 Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.132614 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" podUID="e6651ea1-6311-4597-81cc-a8637f8cc88a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.133017 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 
15:50:37.133064 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.179670 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" podUID="cc38dff8-4b46-4281-96a3-ff88c8200f59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.220829 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.221142 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.261693 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.261834 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.303622 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" podUID="f507ee0e-6836-4f30-b79e-63979d76a449" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.385615 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" podUID="c17ba5f2-7fb4-4ed7-8623-f987653f8f9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.408813 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" podUID="86c162c7-c82d-4627-bf84-11d5fb80199f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.408813 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" podUID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.408903 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.408942 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.408963 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.412768 4957 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-97lxp container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.412872 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" podUID="ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.412794 4957 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-97lxp container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.412939 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-97lxp" podUID="ace67b60-50d9-4a21-bbfc-fb3bf0ff40d6" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.590668 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" podUID="94bb800a-9927-4d0f-b9d2-53e4fb398fda" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.631643 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" podUID="91fd8838-0687-420b-b3dd-4130e221a66d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.755687 4957 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podUID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.755995 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.755725 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podUID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.755686 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" podUID="78c8fb66-d71a-44b7-b858-51f7ca26a407" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.765639 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="03293231-ba61-4099-89c9-b86cd6d9f489" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.765676 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="03293231-ba61-4099-89c9-b86cd6d9f489" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.11:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.799693 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mjqvd" podUID="3d9fa28a-2d86-4e9f-a5da-d5f545bb0331" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.803181 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mjqvd" podUID="3d9fa28a-2d86-4e9f-a5da-d5f545bb0331" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.859586 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"3d3a1ae29fe36ca19aeed36e4506b26f798c3d34fcc449df5b7dcb5cac3ace47"} pod="openshift-console/downloads-7954f5f757-fxh8s" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.859659 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" containerID="cri-o://3d3a1ae29fe36ca19aeed36e4506b26f798c3d34fcc449df5b7dcb5cac3ace47" 
gracePeriod=2 Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.936688 4957 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-khfcc container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.936763 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.936994 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.937027 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.937587 4957 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-khfcc container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.937707 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:37 crc kubenswrapper[4957]: I0218 15:50:37.938042 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.023936 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.027595 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.027670 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.027589 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" podUID="4f287d67-8d26-430a-a775-fdf0abeed6dd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.027866 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.155175 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.155710 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196658 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" podUID="ef6b6faf-f852-4948-8d1b-d53eace855a4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196684 4957 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-t4b27 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196744 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" podUID="c093fd9d-72e8-42d1-a5ad-5e687f61aa9e" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196809 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" podUID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196827 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196843 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" podUID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196879 4957 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-x4qsb 
container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196901 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" podUID="d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.196969 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.204255 4957 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-2wbxs container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.204314 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" podUID="4de1a2b8-9bfb-4104-b065-e0c991cb95ea" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.204411 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.262584 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" podUID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.304587 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" podUID="8adf52f0-b132-4541-8962-7fae9bce89c6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.512686 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.512816 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.513471 4957 prober.go:107] "Probe failed" 
probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.513528 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.525117 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"e4afb054492727097059a2d4d40ca47d547635f3083ddb7d324534e16cc97534"} pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" containerMessage="Container webhook-server failed liveness probe, will be restarted" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.525185 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" containerID="cri-o://e4afb054492727097059a2d4d40ca47d547635f3083ddb7d324534e16cc97534" gracePeriod=2 Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.795568 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-vwl5q" podUID="3b69da89-d2a6-4e8f-ac79-99e1bb296fcc" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.797672 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" podUID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.807722 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.887073 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-d9hcp_2d30d957-c658-4ce4-9b04-3f1d64fb67b7/console-operator/0.log" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.889626 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.890260 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.890308 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.894360 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress_router-default-5444994796-jg752_099076c9-9f78-47b8-87f1-3c9cc47e0b09/router/0.log" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.894484 4957 generic.go:334] "Generic (PLEG): container finished" podID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerID="c8ed8cb910878cb5bab4b11705486636080e4cd10ec369ab57d89ad0832f752e" exitCode=137 Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.894576 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jg752" event={"ID":"099076c9-9f78-47b8-87f1-3c9cc47e0b09","Type":"ContainerDied","Data":"c8ed8cb910878cb5bab4b11705486636080e4cd10ec369ab57d89ad0832f752e"} Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.898490 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.898653 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988607 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988689 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988670 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988788 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-n8wjk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988867 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988778 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-n8wjk" podUID="ae719427-398b-455b-8d4f-d1f96df0e800" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:38 crc kubenswrapper[4957]: I0218 15:50:38.988893 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032529 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032560 4957 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-khfcc container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032594 4957 patch_prober.go:28] interesting pod/logging-loki-gateway-7b58bd6fcd-58vxq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032629 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" podUID="aa193683-1796-419f-ac5f-e620b3206699" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032651 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032670 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032659 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032704 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.032593 4957 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-gateway-7b58bd6fcd-58vxq" podUID="6e82b47f-b61b-40dd-92f1-62180459082f" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.114649 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.114776 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.126116 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"8ec51358c3b2e5eea4e34871efe9d013d61ddca8462c2c83fe7be24427a95975"} pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.126211 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" containerID="cri-o://8ec51358c3b2e5eea4e34871efe9d013d61ddca8462c2c83fe7be24427a95975" gracePeriod=10 Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.279702 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.280052 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.362613 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.362749 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.362791 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363404 4957 
prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-hw6sv" podUID="3929daaa-39b8-475f-9af0-644180cb7682" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363505 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363484 4957 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363538 4957 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-t4b27 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363484 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363581 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" podUID="c093fd9d-72e8-42d1-a5ad-5e687f61aa9e" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363613 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363631 4957 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-x4qsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363659 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" podUID="d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363712 4957 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-2wbxs container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363737 
4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" podUID="4de1a2b8-9bfb-4104-b065-e0c991cb95ea" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.363888 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.364002 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.364177 4957 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.364259 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c7e42da2-0160-4c19-bd98-1ebb4d0d84dc" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.364279 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.364381 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.365324 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"a1d4ad4f006df753a43b4aedb6e48444c211669d711d0b365685c94cc06d2194"} pod="metallb-system/controller-69bbfbf88f-hw6sv" containerMessage="Container controller failed liveness probe, will be restarted" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.365407 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-69bbfbf88f-hw6sv" podUID="3929daaa-39b8-475f-9af0-644180cb7682" containerName="controller" containerID="cri-o://a1d4ad4f006df753a43b4aedb6e48444c211669d711d0b365685c94cc06d2194" gracePeriod=2 Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.384265 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.384313 4957 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.385900 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.447355 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.471988 4957 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.472057 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8ebf8bd1-097b-45c7-be49-c38760e885e2" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.472141 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.608993 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.609290 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:50:39 crc kubenswrapper[4957]: E0218 15:50:39.733367 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:50:39 crc kubenswrapper[4957]: E0218 15:50:39.739976 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:50:39 crc kubenswrapper[4957]: E0218 15:50:39.757723 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:50:39 crc kubenswrapper[4957]: E0218 15:50:39.757848 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.800989 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-nsb6k" podUID="a5d8a39a-4f7f-4d3e-b205-9a209721ca4b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.801285 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-nsb6k" podUID="a5d8a39a-4f7f-4d3e-b205-9a209721ca4b" containerName="registry-server" probeResult="failure" output="command timed out" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.930545 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.936400 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.936470 4957 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="72629481855f492daa6c750fb938bc0e4083bafacfb2a9c5025a2eb58e3dd607" exitCode=1 Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.936559 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"72629481855f492daa6c750fb938bc0e4083bafacfb2a9c5025a2eb58e3dd607"} Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.938697 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" event={"ID":"955eb799-56c6-47e7-b5f7-eccac4b52134","Type":"ContainerStarted","Data":"be11812be588ac22da0e67219fb401ad88da5e617a63bbda1db9d971d23cd01f"} Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.939215 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.939285 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.939318 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.941216 4957 scope.go:117] 
"RemoveContainer" containerID="bb9f54ba352c3e8d1cb70e31e2321f32cc741e9ea7fa871ac8db9cb380486d6c" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.941689 4957 scope.go:117] "RemoveContainer" containerID="72629481855f492daa6c750fb938bc0e4083bafacfb2a9c5025a2eb58e3dd607" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.941993 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-d9hcp_2d30d957-c658-4ce4-9b04-3f1d64fb67b7/console-operator/0.log" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.942241 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" event={"ID":"2d30d957-c658-4ce4-9b04-3f1d64fb67b7","Type":"ContainerStarted","Data":"a8f966b29e4272c48ee49c0dcd25d6701185dda2191c3e64e5c082afdbac8b09"} Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.942760 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.942789 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.946270 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" event={"ID":"eba7e3a4-c6f4-4473-bcec-23a777ba8798","Type":"ContainerStarted","Data":"d988534525e30ac554f3fb3afb4edad3198a3233eedd8b7c4de9e63a01594cc3"} Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.946441 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.949110 4957 generic.go:334] "Generic (PLEG): container finished" podID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerID="5955a16e3e753996736404ee9f2ef22ee71db1024fa108014ada860916093401" exitCode=0 Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.949263 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" event={"ID":"9a161f1b-77bb-4a9d-9bfc-345bb46d439b","Type":"ContainerDied","Data":"5955a16e3e753996736404ee9f2ef22ee71db1024fa108014ada860916093401"} Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.949479 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"1830e452e88ead9167650804bef4fbf015b4ee32728ed0842a8afb13ee9ec0f5"} pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.949508 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" 
containerID="cri-o://1830e452e88ead9167650804bef4fbf015b4ee32728ed0842a8afb13ee9ec0f5" gracePeriod=30 Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.949873 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"02d58f3725fc3455093ad9d4bc9a6fff935d5e35877a9633675912a93af85d0d"} pod="metallb-system/frr-k8s-c8kqz" containerMessage="Container controller failed liveness probe, will be restarted" Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.949991 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" containerID="cri-o://02d58f3725fc3455093ad9d4bc9a6fff935d5e35877a9633675912a93af85d0d" gracePeriod=2 Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.951706 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body= Feb 18 15:50:39 crc kubenswrapper[4957]: I0218 15:50:39.951735 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.029563 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.029630 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.081765 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.081819 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.110964 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:40 crc kubenswrapper[4957]: E0218 15:50:40.347910 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
Feb 18 15:50:40 crc kubenswrapper[4957]: E0218 15:50:40.347910 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:50:40 crc kubenswrapper[4957]: E0218 15:50:40.349342 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:50:40 crc kubenswrapper[4957]: E0218 15:50:40.350432 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:50:40 crc kubenswrapper[4957]: E0218 15:50:40.350487 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server"
Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.365226 4957 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.365322 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c7e42da2-0160-4c19-bd98-1ebb4d0d84dc" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.386680 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.386764 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0549571-def1-4cd5-9cae-77780cf6870b" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.403508 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9"
Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.472853 4957 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18
15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.472913 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8ebf8bd1-097b-45c7-be49-c38760e885e2" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.657570 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.657652 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-wvv4k" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.657659 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.657743 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wvv4k" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.658560 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"46fa0439b78dc98b2e466e1c77a11ad92a91158f1c9a87d365215a5a6fad8545"} pod="metallb-system/speaker-wvv4k" containerMessage="Container speaker failed liveness probe, will be restarted" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.658612 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" containerID="cri-o://46fa0439b78dc98b2e466e1c77a11ad92a91158f1c9a87d365215a5a6fad8545" gracePeriod=2 Feb 18 15:50:40 crc kubenswrapper[4957]: E0218 15:50:40.705758 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3929daaa_39b8_475f_9af0_644180cb7682.slice/crio-a1d4ad4f006df753a43b4aedb6e48444c211669d711d0b365685c94cc06d2194.scope\": RecentStats: unable to find data in memory cache]" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.794902 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.794911 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.795222 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.797313 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" 
containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.833185 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus" containerStatusID={"Type":"cri-o","ID":"8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb"} pod="openshift-monitoring/prometheus-k8s-0" containerMessage="Container prometheus failed liveness probe, will be restarted" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.833349 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" containerID="cri-o://8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" gracePeriod=600 Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.963313 4957 generic.go:334] "Generic (PLEG): container finished" podID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerID="3d3a1ae29fe36ca19aeed36e4506b26f798c3d34fcc449df5b7dcb5cac3ace47" exitCode=0 Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.963370 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerDied","Data":"3d3a1ae29fe36ca19aeed36e4506b26f798c3d34fcc449df5b7dcb5cac3ace47"} Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.966508 4957 generic.go:334] "Generic (PLEG): container finished" podID="455504d8-7edb-4008-9343-536491e9504a" containerID="eb98dfdcf50ca67106e54b27dffd20a2d5ae30d6bd371d384f667d49a3b4466a" exitCode=0 Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.966579 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" event={"ID":"455504d8-7edb-4008-9343-536491e9504a","Type":"ContainerDied","Data":"eb98dfdcf50ca67106e54b27dffd20a2d5ae30d6bd371d384f667d49a3b4466a"} Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.968614 4957 generic.go:334] "Generic (PLEG): container finished" podID="b724d9a9-8ae5-4295-9b4e-5ec65793b59f" containerID="fd8403759e30331549c2e4eacb6050e7241900bfd9dc4c1665a5aaa4974cb2d7" exitCode=1 Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.968658 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" event={"ID":"b724d9a9-8ae5-4295-9b4e-5ec65793b59f","Type":"ContainerDied","Data":"fd8403759e30331549c2e4eacb6050e7241900bfd9dc4c1665a5aaa4974cb2d7"} Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.970268 4957 scope.go:117] "RemoveContainer" containerID="fd8403759e30331549c2e4eacb6050e7241900bfd9dc4c1665a5aaa4974cb2d7" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.972954 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.973843 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.973896 4957 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.975320 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body= Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.975355 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.975320 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.975392 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.975580 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" start-of-body= Feb 18 15:50:40 crc kubenswrapper[4957]: I0218 15:50:40.975599 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" Feb 18 15:50:41 crc kubenswrapper[4957]: E0218 15:50:41.026129 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.071562 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.071625 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.118075 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wvv4k" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.118304 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vwl5q" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.196060 4957 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.196125 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.196218 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.211189 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.215783 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.218133 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" probeResult="failure" output=< Feb 18 15:50:41 crc kubenswrapper[4957]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 18 15:50:41 crc kubenswrapper[4957]: Dload Upload Total Spent Left Speed Feb 18 15:50:41 crc kubenswrapper[4957]: [166B blob data] Feb 18 15:50:41 crc kubenswrapper[4957]: curl: (22) The requested URL returned error: 503 Feb 18 15:50:41 crc kubenswrapper[4957]: > Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.230682 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.230750 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.411498 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" 
podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.411599 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": dial tcp 10.217.0.117:8081: connect: connection refused" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.411929 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.413002 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.415459 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.421821 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"7784a495e093959bb54bfa3752c432302bc14dec26cfc01f800810757b0aa226"} pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" containerMessage="Container manager failed liveness probe, will be restarted" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.421907 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" containerID="cri-o://7784a495e093959bb54bfa3752c432302bc14dec26cfc01f800810757b0aa226" gracePeriod=10 Feb 18 15:50:41 crc kubenswrapper[4957]: E0218 15:50:41.517232 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 18 15:50:41 crc kubenswrapper[4957]: E0218 15:50:41.519356 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 18 15:50:41 crc kubenswrapper[4957]: E0218 15:50:41.533069 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
Feb 18 15:50:41 crc kubenswrapper[4957]: E0218 15:50:41.533069 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"]
Feb 18 15:50:41 crc kubenswrapper[4957]: E0218 15:50:41.533128 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus"
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.799074 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.799155 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-hbqf7"
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.799701 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.799823 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hbqf7"
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.807793 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f"} pod="openstack-operators/openstack-operator-index-hbqf7" containerMessage="Container registry-server failed liveness probe, will be restarted"
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.807855 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" containerID="cri-o://810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f" gracePeriod=30
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.941038 4957 trace.go:236] Trace[122845134]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (18-Feb-2026 15:50:26.910) (total time: 15024ms):
Feb 18 15:50:41 crc kubenswrapper[4957]: Trace[122845134]: [15.024306872s] [15.024306872s] END
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.984136 4957 generic.go:334] "Generic (PLEG): container finished" podID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerID="e4afb054492727097059a2d4d40ca47d547635f3083ddb7d324534e16cc97534" exitCode=0
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.984188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" event={"ID":"32734ff2-fe7b-4588-a4c8-0e5882b54b87","Type":"ContainerDied","Data":"e4afb054492727097059a2d4d40ca47d547635f3083ddb7d324534e16cc97534"}
Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.988671 4957 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" event={"ID":"9a161f1b-77bb-4a9d-9bfc-345bb46d439b","Type":"ContainerStarted","Data":"2132c63845750426029327e692cd0ee4063412adfb3dc10fbc36b08ee9545ff4"} Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.988939 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.989282 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.989340 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 15:50:41 crc kubenswrapper[4957]: I0218 15:50:41.999715 4957 generic.go:334] "Generic (PLEG): container finished" podID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerID="02d58f3725fc3455093ad9d4bc9a6fff935d5e35877a9633675912a93af85d0d" exitCode=0 Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:41.999781 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerDied","Data":"02d58f3725fc3455093ad9d4bc9a6fff935d5e35877a9633675912a93af85d0d"} Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.003460 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerStarted","Data":"dbbd1df2955da23b7bf8778a59c511ed786bb412bed8437feb995b43317ddb58"} Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.003894 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.004131 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.004166 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.006826 4957 generic.go:334] "Generic (PLEG): container finished" podID="3929daaa-39b8-475f-9af0-644180cb7682" containerID="a1d4ad4f006df753a43b4aedb6e48444c211669d711d0b365685c94cc06d2194" exitCode=0 Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.006890 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hw6sv" event={"ID":"3929daaa-39b8-475f-9af0-644180cb7682","Type":"ContainerDied","Data":"a1d4ad4f006df753a43b4aedb6e48444c211669d711d0b365685c94cc06d2194"} Feb 18 15:50:42 crc 
kubenswrapper[4957]: I0218 15:50:42.009042 4957 generic.go:334] "Generic (PLEG): container finished" podID="84258a40-276a-4da4-8240-603932be25c0" containerID="8ec51358c3b2e5eea4e34871efe9d013d61ddca8462c2c83fe7be24427a95975" exitCode=0
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.009101 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" event={"ID":"84258a40-276a-4da4-8240-603932be25c0","Type":"ContainerDied","Data":"8ec51358c3b2e5eea4e34871efe9d013d61ddca8462c2c83fe7be24427a95975"}
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.013289 4957 generic.go:334] "Generic (PLEG): container finished" podID="8bd25216-306e-42c0-93da-a51803507c1f" containerID="7784a495e093959bb54bfa3752c432302bc14dec26cfc01f800810757b0aa226" exitCode=1
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.013331 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" event={"ID":"8bd25216-306e-42c0-93da-a51803507c1f","Type":"ContainerDied","Data":"7784a495e093959bb54bfa3752c432302bc14dec26cfc01f800810757b0aa226"}
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.015728 4957 generic.go:334] "Generic (PLEG): container finished" podID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerID="36d4be2eb68074c03ba91f396a4d8e45729d01422b9eba20f191932441ec74f5" exitCode=1
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.015770 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" event={"ID":"33e1b915-d740-4ec7-b74e-b8b8b6356d4d","Type":"ContainerDied","Data":"36d4be2eb68074c03ba91f396a4d8e45729d01422b9eba20f191932441ec74f5"}
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.020838 4957 scope.go:117] "RemoveContainer" containerID="36d4be2eb68074c03ba91f396a4d8e45729d01422b9eba20f191932441ec74f5"
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.029086 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.039735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"677429a7a4dd4297826c6a370d4946c28fb120501759a126c27fca6577c5ab66"}
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.549310 4957 trace.go:236] Trace[1152264283]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (18-Feb-2026 15:50:35.429) (total time: 7120ms):
Feb 18 15:50:42 crc kubenswrapper[4957]: Trace[1152264283]: [7.12025106s] [7.12025106s] END
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.558617 4957 trace.go:236] Trace[1720764851]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (18-Feb-2026 15:50:39.226) (total time: 3332ms):
Feb 18 15:50:42 crc kubenswrapper[4957]: Trace[1720764851]: [3.332530716s] [3.332530716s] END
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.734226 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body=
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.734530 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused"
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.734313 4957 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-htr7f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body=
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.734580 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" podUID="eba7e3a4-c6f4-4473-bcec-23a777ba8798" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused"
Feb 18 15:50:42 crc kubenswrapper[4957]: I0218 15:50:42.795218 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" probeResult="failure" output="command timed out"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.065633 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.065906 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.066964 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.066982 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.081252 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.081300 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.123310 4957 generic.go:334] "Generic (PLEG): container finished" podID="f507ee0e-6836-4f30-b79e-63979d76a449" containerID="c57fb1d63bd24ab77c2265fcc4abd95ad9b30a2b4f8f1e289dfc3c4f662a2c38" exitCode=1
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.123387 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" event={"ID":"f507ee0e-6836-4f30-b79e-63979d76a449","Type":"ContainerDied","Data":"c57fb1d63bd24ab77c2265fcc4abd95ad9b30a2b4f8f1e289dfc3c4f662a2c38"}
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.124227 4957 scope.go:117] "RemoveContainer" containerID="c57fb1d63bd24ab77c2265fcc4abd95ad9b30a2b4f8f1e289dfc3c4f662a2c38"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.182517 4957 generic.go:334] "Generic (PLEG): container finished" podID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerID="2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda" exitCode=0
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.182606 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerDied","Data":"2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda"}
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.197163 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-c8kqz"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.197409 4957 generic.go:334] "Generic (PLEG): container finished" podID="b95ede57-e275-4ba0-834d-43356f6b960b" containerID="0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c" exitCode=0
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.197472 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerDied","Data":"0a85b3f07e602aefbbbda7dac31c3b119df8cb10abdcb7825d70f1610a35622c"}
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.224134 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" event={"ID":"455504d8-7edb-4008-9343-536491e9504a","Type":"ContainerStarted","Data":"a733835dfc81e5e48d4ab81250d942552547355b8296808dc483b8ccab602abf"}
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.224759 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.227113 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.227175 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.257052 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" event={"ID":"b724d9a9-8ae5-4295-9b4e-5ec65793b59f","Type":"ContainerStarted","Data":"3d7eb356cfaa25279ead6eb962d35ff80eb4ba04ba523c2819b8ff9e0e271414"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.257893 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.260994 4957 generic.go:334] "Generic (PLEG): container finished" podID="1523e723-7145-4c5e-8834-990b6298db41" containerID="fc933db99029da447db2b03fe12f3fa610f419a635cff15cf52b13279a672238" exitCode=0 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.261055 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" event={"ID":"1523e723-7145-4c5e-8834-990b6298db41","Type":"ContainerDied","Data":"fc933db99029da447db2b03fe12f3fa610f419a635cff15cf52b13279a672238"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.262649 4957 generic.go:334] "Generic (PLEG): container finished" podID="8adf52f0-b132-4541-8962-7fae9bce89c6" containerID="64c5948e60606b306816ffc784ec763072230aa73135b3beca2d6c498bb68dcb" exitCode=1 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.262681 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" event={"ID":"8adf52f0-b132-4541-8962-7fae9bce89c6","Type":"ContainerDied","Data":"64c5948e60606b306816ffc784ec763072230aa73135b3beca2d6c498bb68dcb"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.263468 4957 scope.go:117] "RemoveContainer" containerID="64c5948e60606b306816ffc784ec763072230aa73135b3beca2d6c498bb68dcb" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.267614 4957 generic.go:334] "Generic (PLEG): container finished" podID="f70d6609-fcf8-47f9-89dc-986f8f2f902b" containerID="fa6417bfc6a7b42ddeef2365810ad7dff17a1a7009678576c67615b5019beb0e" exitCode=1 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.267659 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" event={"ID":"f70d6609-fcf8-47f9-89dc-986f8f2f902b","Type":"ContainerDied","Data":"fa6417bfc6a7b42ddeef2365810ad7dff17a1a7009678576c67615b5019beb0e"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.268159 4957 scope.go:117] "RemoveContainer" containerID="fa6417bfc6a7b42ddeef2365810ad7dff17a1a7009678576c67615b5019beb0e" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.275239 4957 generic.go:334] "Generic (PLEG): container finished" podID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerID="b353c117523172744de7839ed70ea1395c04c0968731c41a24a113c9cb1a10dc" exitCode=0 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.275277 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" 
event={"ID":"77a4b221-67be-4248-beaa-1f4602e3b35b","Type":"ContainerDied","Data":"b353c117523172744de7839ed70ea1395c04c0968731c41a24a113c9cb1a10dc"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.277220 4957 generic.go:334] "Generic (PLEG): container finished" podID="7e32179f-a59d-44e1-9a56-ca25b8c5ff21" containerID="968ae94edfcba12fecbbd2e18efed4c32a46c1e1c7b27c9816cc37fc4a4e28be" exitCode=0 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.277264 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" event={"ID":"7e32179f-a59d-44e1-9a56-ca25b8c5ff21","Type":"ContainerDied","Data":"968ae94edfcba12fecbbd2e18efed4c32a46c1e1c7b27c9816cc37fc4a4e28be"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.279538 4957 generic.go:334] "Generic (PLEG): container finished" podID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerID="e370b474353f69743b7b5ca79226784bb826fa7402419014686a331422c5efbd" exitCode=1 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.279574 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" event={"ID":"da87ca13-b23a-4345-b79d-46c8e9bec9b3","Type":"ContainerDied","Data":"e370b474353f69743b7b5ca79226784bb826fa7402419014686a331422c5efbd"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.280053 4957 scope.go:117] "RemoveContainer" containerID="e370b474353f69743b7b5ca79226784bb826fa7402419014686a331422c5efbd" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.282440 4957 generic.go:334] "Generic (PLEG): container finished" podID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerID="46fa0439b78dc98b2e466e1c77a11ad92a91158f1c9a87d365215a5a6fad8545" exitCode=0 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.282478 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wvv4k" event={"ID":"379fdde6-815b-433b-b62c-b9863ea4fb9e","Type":"ContainerDied","Data":"46fa0439b78dc98b2e466e1c77a11ad92a91158f1c9a87d365215a5a6fad8545"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.283618 4957 generic.go:334] "Generic (PLEG): container finished" podID="07a618be-7572-49b8-aeb3-12ce37fbe7b3" containerID="55cb52efab64e1a01e30e5e8774b1a8050be910854a4f247409108e4efdd6c46" exitCode=1 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.283679 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" event={"ID":"07a618be-7572-49b8-aeb3-12ce37fbe7b3","Type":"ContainerDied","Data":"55cb52efab64e1a01e30e5e8774b1a8050be910854a4f247409108e4efdd6c46"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.284858 4957 generic.go:334] "Generic (PLEG): container finished" podID="2fbd50ae-c490-4099-b01e-de491ad70559" containerID="3ebc227181edc68e492ff82667e56ce703d103b6b3b7df6b20a3689f34338478" exitCode=0 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.284894 4957 scope.go:117] "RemoveContainer" containerID="55cb52efab64e1a01e30e5e8774b1a8050be910854a4f247409108e4efdd6c46" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.284906 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" event={"ID":"2fbd50ae-c490-4099-b01e-de491ad70559","Type":"ContainerDied","Data":"3ebc227181edc68e492ff82667e56ce703d103b6b3b7df6b20a3689f34338478"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.286475 4957 
generic.go:334] "Generic (PLEG): container finished" podID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerID="810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f" exitCode=0 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.286515 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqf7" event={"ID":"4c4be899-e6fc-4664-89e1-b2eb45187e3a","Type":"ContainerDied","Data":"810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.289398 4957 generic.go:334] "Generic (PLEG): container finished" podID="644451ba-ce73-4312-b6cd-af99eb6c9fbc" containerID="24e1387807fa85236773e4e4bc9fa2e09df0e198efe5be275265d1a261fc2df9" exitCode=1 Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.289456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" event={"ID":"644451ba-ce73-4312-b6cd-af99eb6c9fbc","Type":"ContainerDied","Data":"24e1387807fa85236773e4e4bc9fa2e09df0e198efe5be275265d1a261fc2df9"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.290012 4957 scope.go:117] "RemoveContainer" containerID="24e1387807fa85236773e4e4bc9fa2e09df0e198efe5be275265d1a261fc2df9" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.295441 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-jg752_099076c9-9f78-47b8-87f1-3c9cc47e0b09/router/0.log" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.296135 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jg752" event={"ID":"099076c9-9f78-47b8-87f1-3c9cc47e0b09","Type":"ContainerStarted","Data":"a995d7e3aabd76674adc71a862271b739f4cdad5ee5c8db04f91b3b4edf0d273"} Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.296535 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.296592 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.296727 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.296758 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.618662 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.632601 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.847174 4957 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.126.11:9980/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.847706 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:50:43 crc kubenswrapper[4957]: I0218 15:50:43.922579 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7"
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.207807 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.208166 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.208210 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.295072 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerName="galera" containerID="cri-o://b75fc79baf9d486a4ac87956cd0d8a072675147f14f4b685af9fdcd0c2c1d609" gracePeriod=9
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.295219 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" containerID="cri-o://c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3" gracePeriod=8
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.369399 4957 generic.go:334] "Generic (PLEG): container finished" podID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerID="613e0204559e97040ebfe66db7d75d7effa45b74745a4dccc086b2e01d24ad10" exitCode=0
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.369865 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" event={"ID":"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37","Type":"ContainerDied","Data":"613e0204559e97040ebfe66db7d75d7effa45b74745a4dccc086b2e01d24ad10"}
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.397153 4957 generic.go:334] "Generic (PLEG): container finished" podID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerID="ee7bfd0150096901eaacb13ccf0a2ce0017054cd8ce3e58ca9d7e0cd72b7f6be" exitCode=0
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.397225 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerDied","Data":"ee7bfd0150096901eaacb13ccf0a2ce0017054cd8ce3e58ca9d7e0cd72b7f6be"}
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.436709 4957 generic.go:334] "Generic (PLEG): container finished" podID="cc38dff8-4b46-4281-96a3-ff88c8200f59" containerID="0a62d87beb942e7bb0389dbbe45d3ae760e02628c464a58cbdee9272f63cb387" exitCode=1
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.436796 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" event={"ID":"cc38dff8-4b46-4281-96a3-ff88c8200f59","Type":"ContainerDied","Data":"0a62d87beb942e7bb0389dbbe45d3ae760e02628c464a58cbdee9272f63cb387"}
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.437579 4957 scope.go:117] "RemoveContainer" containerID="0a62d87beb942e7bb0389dbbe45d3ae760e02628c464a58cbdee9272f63cb387"
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.456702 4957 generic.go:334] "Generic (PLEG): container finished" podID="6c6f7318-74c7-4971-9888-45a6c025bdde" containerID="c1c834fa1e5f54a66603ef6c4c7a02077f8ad7ce4500b581beae338a5652083b" exitCode=1
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.456800 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" event={"ID":"6c6f7318-74c7-4971-9888-45a6c025bdde","Type":"ContainerDied","Data":"c1c834fa1e5f54a66603ef6c4c7a02077f8ad7ce4500b581beae338a5652083b"}
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.457536 4957 scope.go:117] "RemoveContainer" containerID="c1c834fa1e5f54a66603ef6c4c7a02077f8ad7ce4500b581beae338a5652083b"
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.478512 4957 generic.go:334] "Generic (PLEG): container finished" podID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerID="1830e452e88ead9167650804bef4fbf015b4ee32728ed0842a8afb13ee9ec0f5" exitCode=0
Feb 18 15:50:44 crc kubenswrapper[4957]: I0218 15:50:44.478618 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" event={"ID":"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd","Type":"ContainerDied","Data":"1830e452e88ead9167650804bef4fbf015b4ee32728ed0842a8afb13ee9ec0f5"}
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.495375 4957 generic.go:334] "Generic (PLEG): container finished" podID="564a48e7-438e-4374-9b43-92409e093ae2" containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" exitCode=0
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.495672 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerDied","Data":"8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb"}
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.526186 4957 generic.go:334] "Generic (PLEG): container finished" podID="4f287d67-8d26-430a-a775-fdf0abeed6dd" containerID="afa6f369785be620d98fa0eec8cfde9c15198ff7482cb86d1e5977af114d991d" exitCode=1
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.526246 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" event={"ID":"4f287d67-8d26-430a-a775-fdf0abeed6dd","Type":"ContainerDied","Data":"afa6f369785be620d98fa0eec8cfde9c15198ff7482cb86d1e5977af114d991d"}
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.526669 4957 scope.go:117] "RemoveContainer" containerID="afa6f369785be620d98fa0eec8cfde9c15198ff7482cb86d1e5977af114d991d"
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.538046 4957 generic.go:334] "Generic (PLEG): container finished" podID="18f96572-e72c-48ae-b22b-4c6fb7a4d7b9" containerID="293cf42b3f50593b2932ae2af41ab4fbca55dc0922eb5797a858e5c7d72ab522" exitCode=1
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.538104 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" event={"ID":"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9","Type":"ContainerDied","Data":"293cf42b3f50593b2932ae2af41ab4fbca55dc0922eb5797a858e5c7d72ab522"}
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.542699 4957 scope.go:117] "RemoveContainer" containerID="293cf42b3f50593b2932ae2af41ab4fbca55dc0922eb5797a858e5c7d72ab522"
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.609153 4957 generic.go:334] "Generic (PLEG): container finished" podID="09673cd4-22c2-43fa-87ae-17b7a8a03308" containerID="6e53b53fba9c4ba90a2dd99b95b660b7e9883137fab02e7cafb7a73e864e5ed9" exitCode=1
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.609238 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" event={"ID":"09673cd4-22c2-43fa-87ae-17b7a8a03308","Type":"ContainerDied","Data":"6e53b53fba9c4ba90a2dd99b95b660b7e9883137fab02e7cafb7a73e864e5ed9"}
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.610022 4957 scope.go:117] "RemoveContainer" containerID="6e53b53fba9c4ba90a2dd99b95b660b7e9883137fab02e7cafb7a73e864e5ed9"
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.616016 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.616061 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.617316 4957 generic.go:334] "Generic (PLEG): container finished" podID="c17ba5f2-7fb4-4ed7-8623-f987653f8f9b" containerID="94dd8748d3f7a11c97dbfaccdb5220072e7a27a32f484ff69678927cb7b0efad" exitCode=1
Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.617367 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" event={"ID":"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b","Type":"ContainerDied","Data":"94dd8748d3f7a11c97dbfaccdb5220072e7a27a32f484ff69678927cb7b0efad"}
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" event={"ID":"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b","Type":"ContainerDied","Data":"94dd8748d3f7a11c97dbfaccdb5220072e7a27a32f484ff69678927cb7b0efad"} Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.618181 4957 scope.go:117] "RemoveContainer" containerID="94dd8748d3f7a11c97dbfaccdb5220072e7a27a32f484ff69678927cb7b0efad" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.664627 4957 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:44.664701 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.423441 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.424520 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.429139 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.436251 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.436311 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.436337 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.436383 4957 patch_prober.go:28] interesting pod/console-operator-58897d9998-d9hcp container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.436398 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" podUID="2d30d957-c658-4ce4-9b04-3f1d64fb67b7" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.436785 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-slwlj" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.444484 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.444547 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.468527 4957 generic.go:334] "Generic (PLEG): container finished" podID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerID="cabb58affd24b77604dc16dd5acc1e8c116d4b9953701d961a7a64b31b2e4c32" exitCode=1 Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.468640 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" event={"ID":"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe","Type":"ContainerDied","Data":"cabb58affd24b77604dc16dd5acc1e8c116d4b9953701d961a7a64b31b2e4c32"} Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.469743 4957 scope.go:117] "RemoveContainer" containerID="cabb58affd24b77604dc16dd5acc1e8c116d4b9953701d961a7a64b31b2e4c32" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.479194 4957 generic.go:334] "Generic (PLEG): container finished" podID="147c50a5-37fc-4b06-803f-8ad1d1fd4625" containerID="775828847bbe6eab090bab810ff3d23e9c9158df42f6aa15ac2d381ab4f6e1a4" exitCode=1 Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.479283 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" event={"ID":"147c50a5-37fc-4b06-803f-8ad1d1fd4625","Type":"ContainerDied","Data":"775828847bbe6eab090bab810ff3d23e9c9158df42f6aa15ac2d381ab4f6e1a4"} Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.480602 4957 scope.go:117] "RemoveContainer" containerID="775828847bbe6eab090bab810ff3d23e9c9158df42f6aa15ac2d381ab4f6e1a4" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.523385 4957 generic.go:334] "Generic (PLEG): container finished" podID="ff480d9a-ead3-47a1-a765-59507dfe0853" containerID="14fd62a5a09beb5fc28157c1ed82c2f6be5742bb8bd993c668c3451458eeac47" exitCode=1 Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.523500 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" event={"ID":"ff480d9a-ead3-47a1-a765-59507dfe0853","Type":"ContainerDied","Data":"14fd62a5a09beb5fc28157c1ed82c2f6be5742bb8bd993c668c3451458eeac47"} Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.524357 4957 scope.go:117] "RemoveContainer" containerID="14fd62a5a09beb5fc28157c1ed82c2f6be5742bb8bd993c668c3451458eeac47" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.528798 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xg6pp" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.539857 4957 generic.go:334] "Generic (PLEG): container finished" podID="91fd8838-0687-420b-b3dd-4130e221a66d" containerID="82fc05ab6ab6921cb405de4a1c392ed2a0127f16a96515ebfb8dfbe845722ea4" exitCode=1 Feb 18 
15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.540329 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" event={"ID":"91fd8838-0687-420b-b3dd-4130e221a66d","Type":"ContainerDied","Data":"82fc05ab6ab6921cb405de4a1c392ed2a0127f16a96515ebfb8dfbe845722ea4"} Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.541230 4957 scope.go:117] "RemoveContainer" containerID="82fc05ab6ab6921cb405de4a1c392ed2a0127f16a96515ebfb8dfbe845722ea4" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.542448 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.542470 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.542472 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.588109 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vt4tc" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.591989 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.599266 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jv5dd" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.610356 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.611648 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.611697 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.646328 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.646629 4957 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.646749 4957 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8bmsm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.646782 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" podUID="3925001b-348a-4dde-a066-e49891c345bb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.723155 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.724659 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.740002 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.740050 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.740107 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.740159 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.783464 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.805181 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.857383 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cxzhc" Feb 
18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.929202 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.953595 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.953662 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.955029 4957 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q4vp7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.955059 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" podUID="9a161f1b-77bb-4a9d-9bfc-345bb46d439b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.968785 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.968828 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.968872 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qc4wh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 18 15:50:45 crc kubenswrapper[4957]: I0218 15:50:45.968921 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" podUID="455504d8-7edb-4008-9343-536491e9504a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.081113 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: 
connect: connection refused" start-of-body= Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.081560 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.100609 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.162380 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.188036 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.214656 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.234297 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6fb88c9bd-6wgxk" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.266403 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.266488 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 15:50:46 crc kubenswrapper[4957]: E0218 15:50:46.516081 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb is running failed: container process not found" containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 18 15:50:46 crc kubenswrapper[4957]: E0218 15:50:46.516769 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb is running failed: container process not found" containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 18 15:50:46 crc kubenswrapper[4957]: E0218 15:50:46.517969 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb is running failed: container process not found" 
containerID="8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 18 15:50:46 crc kubenswrapper[4957]: E0218 15:50:46.518085 4957 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ba3262a879c6a0bbe38c431c19cbff1d6ba81480df2e43affbc70c27daaf5bb is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="564a48e7-438e-4374-9b43-92409e093ae2" containerName="prometheus" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.926541 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" start-of-body= Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.926586 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.926739 4957 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mxz2r container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" start-of-body= Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.926780 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" podUID="955eb799-56c6-47e7-b5f7-eccac4b52134" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": dial tcp 10.217.0.21:8081: connect: connection refused" Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.928687 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-khfcc" Feb 18 15:50:46 crc kubenswrapper[4957]: E0218 15:50:46.932713 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f is running failed: container process not found" containerID="810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.932770 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 15:50:46 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Feb 18 15:50:46 crc kubenswrapper[4957]: [+]process-running ok Feb 18 15:50:46 crc kubenswrapper[4957]: healthz check failed Feb 18 15:50:46 crc kubenswrapper[4957]: I0218 15:50:46.932810 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" 
podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:47 crc kubenswrapper[4957]: E0218 15:50:46.933319 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f is running failed: container process not found" containerID="810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:50:47 crc kubenswrapper[4957]: E0218 15:50:46.933638 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f is running failed: container process not found" containerID="810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:50:47 crc kubenswrapper[4957]: E0218 15:50:46.933675 4957 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 810337c5e67f29ddb7ae6e69c4047e13673031b1389e81b5251b6b5e7b806d4f is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hbqf7" podUID="4c4be899-e6fc-4664-89e1-b2eb45187e3a" containerName="registry-server" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:46.943942 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:46.946096 4957 generic.go:334] "Generic (PLEG): container finished" podID="33e776b3-c81e-4655-82a8-88c63ff8adf7" containerID="3dcdc21909fe479edf48311f9ce2eef3b7e38d5d08ab62b69edbe6218561c6a3" exitCode=1 Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:46.946140 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" event={"ID":"33e776b3-c81e-4655-82a8-88c63ff8adf7","Type":"ContainerDied","Data":"3dcdc21909fe479edf48311f9ce2eef3b7e38d5d08ab62b69edbe6218561c6a3"} Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:46.946891 4957 scope.go:117] "RemoveContainer" containerID="3dcdc21909fe479edf48311f9ce2eef3b7e38d5d08ab62b69edbe6218561c6a3" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:46.954523 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c8kqz" event={"ID":"06e3f0bd-70d3-493b-ab24-e8f75298f7a3","Type":"ContainerStarted","Data":"d6d328ed79b43da1cd3c72d089256b5c6d505167cf1eedecf2875b89c80febda"} Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:46.957563 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.132062 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-t4b27" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.182557 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4qsb" Feb 
18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.215057 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-2wbxs" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.430940 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": dial tcp 10.217.0.95:7472: connect: connection refused" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.463754 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.609334 4957 patch_prober.go:28] interesting pod/router-default-5444994796-jg752 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 15:50:47 crc kubenswrapper[4957]: [+]has-synced ok Feb 18 15:50:47 crc kubenswrapper[4957]: [+]process-running ok Feb 18 15:50:47 crc kubenswrapper[4957]: healthz check failed Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.609543 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jg752" podUID="099076c9-9f78-47b8-87f1-3c9cc47e0b09" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.905971 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.906473 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.989846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" event={"ID":"2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd","Type":"ContainerStarted","Data":"252fe5c75814ed90f5a62b2f32611fcab73570fdfd239ede69a5529bf4dc59d1"} Feb 18 15:50:47 crc kubenswrapper[4957]: I0218 15:50:47.992128 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hw6sv" event={"ID":"3929daaa-39b8-475f-9af0-644180cb7682","Type":"ContainerStarted","Data":"22cf8773503769322592cf324af669f728bb0b5aca7ebaae5493ac66dd3fce9b"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:47.992278 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:47.994024 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" 
containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": dial tcp 10.217.0.96:7572: connect: connection refused" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.003225 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" event={"ID":"32734ff2-fe7b-4588-a4c8-0e5882b54b87","Type":"ContainerStarted","Data":"a14536e3f4e035b6df34db0b81fc9988196bdec93b5bfa91798d2323a7d8135e"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.005649 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wvv4k" event={"ID":"379fdde6-815b-433b-b62c-b9863ea4fb9e","Type":"ContainerStarted","Data":"fc5c8c8d4f4a57fe871c423c4d6886ca0a7b477a9a62826a5f23c527581b3452"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.012863 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.018686 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" event={"ID":"f70d6609-fcf8-47f9-89dc-986f8f2f902b","Type":"ContainerStarted","Data":"8f095b7a1ab1291119dfebfc4d40142d62b6896bf76b3511f44053ddc6d7f925"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.018871 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.036046 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" event={"ID":"77a4b221-67be-4248-beaa-1f4602e3b35b","Type":"ContainerStarted","Data":"264bc712cb19a345000953c0a09ce601e5f8fc8b239aed64d824c1326d71f82b"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.059399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" event={"ID":"7b0517cf-fb33-49b0-9f1c-ba39f8edfc37","Type":"ContainerStarted","Data":"4e7e054f2e8b1c0f29d823679d3e88bf61a3e956d7b80699c40819441412be31"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.061519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" event={"ID":"8adf52f0-b132-4541-8962-7fae9bce89c6","Type":"ContainerStarted","Data":"92c9f45aa94f1e86c777adee3067e44574c3ed28c52d999e8c37d389bfc7ba6c"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.062989 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" event={"ID":"4f287d67-8d26-430a-a775-fdf0abeed6dd","Type":"ContainerStarted","Data":"864c3b34e0d464a94f8c3ca675f2cb5db6f90dde7144645b318a5a171e530f8e"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.064901 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" event={"ID":"8bd25216-306e-42c0-93da-a51803507c1f","Type":"ContainerStarted","Data":"200338243d4ecb45bd982da99bca51668f179178194bce3f1e186aac31fc9047"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.067045 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" 
event={"ID":"cc38dff8-4b46-4281-96a3-ff88c8200f59","Type":"ContainerStarted","Data":"3b2148103dfdf85f14d7b2cf017226ef556bd9dc6afb3a7570cf622939e80604"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.069570 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" event={"ID":"07a618be-7572-49b8-aeb3-12ce37fbe7b3","Type":"ContainerStarted","Data":"54f9f5d8976c38f3f8637e6fb9c02b9b32f8f44e694f9bdc358aa35626580ae6"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.094735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"564a48e7-438e-4374-9b43-92409e093ae2","Type":"ContainerStarted","Data":"aa2be02babf8db94e0c785c1d36b08dd7f5728d98e598e72fed680953a578980"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.137233 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" event={"ID":"da87ca13-b23a-4345-b79d-46c8e9bec9b3","Type":"ContainerStarted","Data":"ec800b32147233e2ea92a49dccbe2d8c2523a5a6444f27d58061cae6ace8fbde"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.153145 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" event={"ID":"1523e723-7145-4c5e-8834-990b6298db41","Type":"ContainerStarted","Data":"c25cb7733ea463cce06ffea062ab60da1a5c51c1c2a5dfdc8a55c3c894b7844d"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.168548 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" event={"ID":"f507ee0e-6836-4f30-b79e-63979d76a449","Type":"ContainerStarted","Data":"ccb1063df90c6e35f2936f7f4f0936b085b6a65c97d6ea195f5bf7ad5f801a19"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.176338 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" event={"ID":"6c6f7318-74c7-4971-9888-45a6c025bdde","Type":"ContainerStarted","Data":"bf71a9989e4ddec8653375a0d4b3c38e4d983aad02ba36d0f587baf0932be43d"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.178585 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.205089 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" event={"ID":"2fbd50ae-c490-4099-b01e-de491ad70559","Type":"ContainerStarted","Data":"970eb537a4bb8348c03c52dfb083d8107b4c2db48fa7fa2179d0e3aa1561d0b1"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.206741 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.206834 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.206868 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.279755 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" event={"ID":"33e1b915-d740-4ec7-b74e-b8b8b6356d4d","Type":"ContainerStarted","Data":"06e888faab675fdbbc06558eaa9d5884be0125586d1e2ece881282b81c00c194"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.283158 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.283431 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" event={"ID":"84258a40-276a-4da4-8240-603932be25c0","Type":"ContainerStarted","Data":"a1d64414ce6a3812d90c64113cab98f2a967657b7501aff86374def58a7cd061"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.283470 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.289446 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" event={"ID":"644451ba-ce73-4312-b6cd-af99eb6c9fbc","Type":"ContainerStarted","Data":"99a3915065586d56c3323707b99afc47ba9b1a7e31ae1429ff640b4fbaa376fd"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.290781 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.302944 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vd6hx" event={"ID":"7e32179f-a59d-44e1-9a56-ca25b8c5ff21","Type":"ContainerStarted","Data":"437fce3d6219bac2b6f598a79ba9f3410ab903485096e52c96ea1669d183188c"} Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.321891 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.390240 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.476106 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 18 15:50:48 crc kubenswrapper[4957]: I0218 15:50:48.613065 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.081974 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.082032 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.315569 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d7pcl" event={"ID":"33e776b3-c81e-4655-82a8-88c63ff8adf7","Type":"ContainerStarted","Data":"0d623b8a21477a80c3b08a5bd1b130fad15f48d6151a6dea159858255b5a7f44"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.318260 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbqf7" event={"ID":"4c4be899-e6fc-4664-89e1-b2eb45187e3a","Type":"ContainerStarted","Data":"357e54e4219d584c96e2f74b54cc4e0b5edd0b705ca5f5a8ce0824124b549764"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.321624 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" event={"ID":"eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe","Type":"ContainerStarted","Data":"ea21bd50341e9792fb0ad9a240e3b1898f2d0118106d1568f06005f19f9de6ae"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.321733 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.324358 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" event={"ID":"147c50a5-37fc-4b06-803f-8ad1d1fd4625","Type":"ContainerStarted","Data":"8f27b3d6e9233e2eefd075abba21c6536489f2940f953a29bffa6fe39af1f26d"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.324913 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.329058 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerStarted","Data":"daecdfd0e42acc167fb8317a1a09f714d3aa6f05aaed86f55ff9bbaaf60a4415"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.336722 4957 generic.go:334] "Generic (PLEG): container finished" podID="a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c" containerID="b75fc79baf9d486a4ac87956cd0d8a072675147f14f4b685af9fdcd0c2c1d609" exitCode=0 Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.336799 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c","Type":"ContainerDied","Data":"b75fc79baf9d486a4ac87956cd0d8a072675147f14f4b685af9fdcd0c2c1d609"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.341178 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" event={"ID":"ff480d9a-ead3-47a1-a765-59507dfe0853","Type":"ContainerStarted","Data":"4af40a2cc72c6ed59bd1cb12fe3f2f1006ffb3ed617a2a5e60f84adcf90d37fe"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.342272 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 
15:50:49.348560 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" event={"ID":"09673cd4-22c2-43fa-87ae-17b7a8a03308","Type":"ContainerStarted","Data":"5cf66d83502edd41ec818ab73f94c2766932f8fcabb055aab4835571359a3ad8"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.349736 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.360746 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllqp" event={"ID":"b95ede57-e275-4ba0-834d-43356f6b960b","Type":"ContainerStarted","Data":"dc5141968f3272c219328846c42f7822b493046e84c7e80fea336c1a2fe0f0cf"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.363788 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" event={"ID":"91fd8838-0687-420b-b3dd-4130e221a66d","Type":"ContainerStarted","Data":"1dc7f78977cff256be02801f78082fdc2bea3e50c26a4a32011e2e0e62c4f8a0"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.363920 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.366123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" event={"ID":"18f96572-e72c-48ae-b22b-4c6fb7a4d7b9","Type":"ContainerStarted","Data":"2add5f41d679c5df649e798af9d672ef3451b78c214657a1efd03fe7036a54cb"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.366255 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.368290 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" event={"ID":"c17ba5f2-7fb4-4ed7-8623-f987653f8f9b","Type":"ContainerStarted","Data":"0046ec61b217e53493724ed3b8d245e4b29291c8d6949bed0886f7498a64f4c2"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.368792 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.375820 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerStarted","Data":"7ad31d8e6a406cf0887d96673a35c5569584e551a89febcee4c6cdcfc7645dff"} Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.376407 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.376521 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" podUID="8bd25216-306e-42c0-93da-a51803507c1f" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.377118 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvskg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.377145 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" podUID="2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.378308 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" podUID="32734ff2-fe7b-4588-a4c8-0e5882b54b87" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": dial tcp 10.217.0.95:7472: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.378370 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.378388 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.378723 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.378750 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.378974 4957 patch_prober.go:28] interesting pod/controller-manager-65c9ff5d9d-fz5wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.379010 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" podUID="1523e723-7145-4c5e-8834-990b6298db41" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: 
connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.380299 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-wvv4k" podUID="379fdde6-815b-433b-b62c-b9863ea4fb9e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.383923 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jg752" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.616623 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wvv4k" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.714095 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 15:50:49 crc kubenswrapper[4957]: I0218 15:50:49.714514 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 15:50:49 crc kubenswrapper[4957]: E0218 15:50:49.857815 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:50:49 crc kubenswrapper[4957]: E0218 15:50:49.865227 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:50:49 crc kubenswrapper[4957]: E0218 15:50:49.870190 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:50:49 crc kubenswrapper[4957]: E0218 15:50:49.870284 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.330322 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.345530 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.345646 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.388003 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c","Type":"ContainerStarted","Data":"0b5d350a68f562463b40c0e1bae7ff92ef070a4218d0f10891dbebffc145b9bf"} Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.388667 4957 patch_prober.go:28] interesting pod/route-controller-manager-6f58845d78-bfqn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.388720 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" podUID="2fbd50ae-c490-4099-b01e-de491ad70559" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.450897 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.450975 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.451888 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c56a353ed7b7e1b4b51e206a5ceb87b775ac853438042079b219de59fd7eab4c"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.451939 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" containerID="cri-o://c56a353ed7b7e1b4b51e206a5ceb87b775ac853438042079b219de59fd7eab4c" gracePeriod=30 Feb 18 15:50:50 crc kubenswrapper[4957]: I0218 15:50:50.774000 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:50:50 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:50:50 crc kubenswrapper[4957]: > Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.399451 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:50:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:50:51 crc kubenswrapper[4957]: > Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.402732 4957 generic.go:334] "Generic (PLEG): container finished" podID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerID="c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3" exitCode=0 Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.403738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070","Type":"ContainerDied","Data":"c2d6a2612ba18fb658d5c66424e36628b24b284c48f3a48ee8a77762ba6d77e3"} Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.411386 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.514813 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.597939 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 15:50:51 crc kubenswrapper[4957]: I0218 15:50:51.597973 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.081737 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.082042 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.082164 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.082042 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.082217 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.082811 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-tmknf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.082839 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" podUID="7b0517cf-fb33-49b0-9f1c-ba39f8edfc37" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.421468 4957 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070","Type":"ContainerStarted","Data":"76de37e133ed607042fa9f9e7fbbbc6098ca2bf6a3ee0da537aa80aa8fa20276"} Feb 18 15:50:52 crc kubenswrapper[4957]: I0218 15:50:52.763737 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-htr7f" Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.062570 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.071072 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65c9ff5d9d-fz5wk" Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.077799 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f58845d78-bfqn7" Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.447569 4957 trace.go:236] Trace[1593290171]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (18-Feb-2026 15:50:51.904) (total time: 1543ms): Feb 18 15:50:53 crc kubenswrapper[4957]: Trace[1593290171]: [1.543088131s] [1.543088131s] END Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.918371 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.919826 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body= Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.919896 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.920315 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body= Feb 18 15:50:53 crc kubenswrapper[4957]: I0218 15:50:53.920339 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" Feb 18 15:50:54 crc kubenswrapper[4957]: I0218 15:50:54.239411 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:50:54 crc kubenswrapper[4957]: I0218 15:50:54.243702 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:50:55 crc 
kubenswrapper[4957]: I0218 15:50:55.269272 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.272974 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.295095 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.295191 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b85vd" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.309182 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-d9hcp" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.333395 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" podUID="77a4b221-67be-4248-beaa-1f4602e3b35b" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.345823 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tmknf" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.450918 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.538816 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fx4tl" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.653391 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8bmsm" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.740494 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.740562 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.740642 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:50:55 crc kubenswrapper[4957]: I0218 15:50:55.740696 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" 
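
Interleaved with the failures, the SyncLoop (probe) records mark probe-result transitions: for readiness the status field is "ready" or empty (not ready), liveness logs "unhealthy" on failure, and startup probes log "started" or "unhealthy". That reading matches the records above and the source they cite (kubelet.go:2542), though the exact strings are an implementation detail. Tracking the latest readiness status per pod gives a quick convergence picture:

import re

SYNC_RE = re.compile(r'"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"')

def readiness_state(lines):
    """Latest readiness status per pod; an empty status means not ready."""
    state = {}
    for line in lines:
        m = SYNC_RE.search(line)
        if m and m.group(1) == "readiness":
            state[m.group(3)] = m.group(2) or "not-ready"
    return state
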
probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.188184 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.194442 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q4vp7" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.200931 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kx5gv" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.201428 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9mf8z" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.201565 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zc4tx" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.201641 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-czjx4" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.202241 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.202971 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-84fccf866c-pq2q4" podUID="95e72012-f049-432b-8490-b92c0e5724b8" containerName="console" containerID="cri-o://5e1bcb031d8c22adc291ab4ddeff87798dc82f26b4768e6947cd172516be96d8" gracePeriod=10 Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.203340 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5k7g6" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.207473 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qc4wh" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.220265 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.249524 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-54bf66477-rc4j4" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.249595 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-2fdgz" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.266678 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kshbq" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.513922 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccf866c-pq2q4_95e72012-f049-432b-8490-b92c0e5724b8/console/0.log" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.513992 4957 
generic.go:334] "Generic (PLEG): container finished" podID="95e72012-f049-432b-8490-b92c0e5724b8" containerID="5e1bcb031d8c22adc291ab4ddeff87798dc82f26b4768e6947cd172516be96d8" exitCode=2 Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.514168 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccf866c-pq2q4" event={"ID":"95e72012-f049-432b-8490-b92c0e5724b8","Type":"ContainerDied","Data":"5e1bcb031d8c22adc291ab4ddeff87798dc82f26b4768e6947cd172516be96d8"} Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.668407 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5666c999f9-b87pp" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.685093 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mxz2r" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.766677 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8lgmk"] Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.772043 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.858025 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5c9544-331f-47df-898d-d19b2c9fe2b1-utilities\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.858250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdss\" (UniqueName: \"kubernetes.io/projected/9a5c9544-331f-47df-898d-d19b2c9fe2b1-kube-api-access-jrdss\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.858474 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5c9544-331f-47df-898d-d19b2c9fe2b1-catalog-content\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.877387 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.882734 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.945545 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.952750 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lgmk"] Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.966099 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdss\" (UniqueName: \"kubernetes.io/projected/9a5c9544-331f-47df-898d-d19b2c9fe2b1-kube-api-access-jrdss\") pod 
\"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.966300 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5c9544-331f-47df-898d-d19b2c9fe2b1-catalog-content\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.966356 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5c9544-331f-47df-898d-d19b2c9fe2b1-utilities\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.967065 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5c9544-331f-47df-898d-d19b2c9fe2b1-utilities\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:56 crc kubenswrapper[4957]: I0218 15:50:56.967409 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5c9544-331f-47df-898d-d19b2c9fe2b1-catalog-content\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.044815 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdss\" (UniqueName: \"kubernetes.io/projected/9a5c9544-331f-47df-898d-d19b2c9fe2b1-kube-api-access-jrdss\") pod \"redhat-operators-8lgmk\" (UID: \"9a5c9544-331f-47df-898d-d19b2c9fe2b1\") " pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.057430 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.124083 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.468986 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.480012 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84df667ccc-2w5tf" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.573138 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccf866c-pq2q4_95e72012-f049-432b-8490-b92c0e5724b8/console/0.log" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.573281 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccf866c-pq2q4" event={"ID":"95e72012-f049-432b-8490-b92c0e5724b8","Type":"ContainerStarted","Data":"8cce53c9be9db6b36ae3872b83359bc8bff4cbf5ae6c55165e110a6f792dd8b5"} Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.578303 4957 generic.go:334] "Generic (PLEG): container finished" podID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerID="c56a353ed7b7e1b4b51e206a5ceb87b775ac853438042079b219de59fd7eab4c" exitCode=0 Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.580817 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7fbc309-e5ee-4222-8409-6d68468ae015","Type":"ContainerDied","Data":"c56a353ed7b7e1b4b51e206a5ceb87b775ac853438042079b219de59fd7eab4c"} Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.630244 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hbqf7" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.905133 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 15:50:57 crc kubenswrapper[4957]: I0218 15:50:57.909800 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cvskg" Feb 18 15:50:58 crc kubenswrapper[4957]: I0218 15:50:58.020026 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c8kqz" Feb 18 15:50:58 crc kubenswrapper[4957]: I0218 15:50:58.072866 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" Feb 18 15:50:58 crc kubenswrapper[4957]: I0218 15:50:58.087683 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" Feb 18 15:50:58 crc kubenswrapper[4957]: I0218 15:50:58.155102 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-hw6sv" Feb 18 15:50:58 crc kubenswrapper[4957]: W0218 15:50:58.215499 4957 logging.go:55] [core] [Channel #4955 SubChannel #4956]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. 
Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused" Feb 18 15:50:58 crc kubenswrapper[4957]: I0218 15:50:58.592599 4957 generic.go:334] "Generic (PLEG): container finished" podID="11cb8341-3939-4c82-9745-510f73904864" containerID="fd27e8571ceb4b4cffa7844add2265690bd0eecf09b5f3c39992bde1f2249d31" exitCode=137 Feb 18 15:50:58 crc kubenswrapper[4957]: I0218 15:50:58.592792 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerDied","Data":"fd27e8571ceb4b4cffa7844add2265690bd0eecf09b5f3c39992bde1f2249d31"} Feb 18 15:50:59 crc kubenswrapper[4957]: I0218 15:50:59.619783 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wvv4k" Feb 18 15:50:59 crc kubenswrapper[4957]: W0218 15:50:59.634127 4957 logging.go:55] [core] [Channel #4957 SubChannel #4958]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused" Feb 18 15:50:59 crc kubenswrapper[4957]: I0218 15:50:59.634594 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lgmk"] Feb 18 15:50:59 crc kubenswrapper[4957]: W0218 15:50:59.651127 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a5c9544_331f_47df_898d_d19b2c9fe2b1.slice/crio-c82df8bfed1c0866de3c2760e68b832c0f327e5860c44ddda623bdddf09de526 WatchSource:0}: Error finding container c82df8bfed1c0866de3c2760e68b832c0f327e5860c44ddda623bdddf09de526: Status 404 returned error can't find the container with id c82df8bfed1c0866de3c2760e68b832c0f327e5860c44ddda623bdddf09de526 Feb 18 15:50:59 crc kubenswrapper[4957]: I0218 15:50:59.850825 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 15:50:59 crc kubenswrapper[4957]: I0218 15:50:59.851082 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 15:50:59 crc kubenswrapper[4957]: I0218 15:50:59.893747 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fjk2"] Feb 18 15:50:59 crc kubenswrapper[4957]: I0218 15:50:59.945761 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.066584 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fjk2"] Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.083518 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4sf\" (UniqueName: \"kubernetes.io/projected/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-kube-api-access-np4sf\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.083626 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-utilities\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.083651 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-catalog-content\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.185954 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np4sf\" (UniqueName: \"kubernetes.io/projected/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-kube-api-access-np4sf\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.186280 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-utilities\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.186307 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-catalog-content\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.187110 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-utilities\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.187235 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-catalog-content\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.212338 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-np4sf\" (UniqueName: \"kubernetes.io/projected/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-kube-api-access-np4sf\") pod \"certified-operators-6fjk2\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") " pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.331186 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.333561 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q5pw9" Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.653848 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgmk" event={"ID":"9a5c9544-331f-47df-898d-d19b2c9fe2b1","Type":"ContainerStarted","Data":"7303af9945a0d42dfa98f9f786eef94c77a39ccbb9abe0ac67e0886b428d7195"} Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.654342 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgmk" event={"ID":"9a5c9544-331f-47df-898d-d19b2c9fe2b1","Type":"ContainerStarted","Data":"c82df8bfed1c0866de3c2760e68b832c0f327e5860c44ddda623bdddf09de526"} Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.683917 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"20fc7fc79183597c401d1be51da1397f84da90ce9a97764e5ced3648db131b81"} Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.714112 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7fbc309-e5ee-4222-8409-6d68468ae015","Type":"ContainerStarted","Data":"9a65b000af11288b45233c3048e759a3b9e31499b1203d39e14ce3c311a2d032"} Feb 18 15:51:00 crc kubenswrapper[4957]: I0218 15:51:00.819602 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:00 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:00 crc kubenswrapper[4957]: > Feb 18 15:51:01 crc kubenswrapper[4957]: I0218 15:51:01.086519 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fjk2"] Feb 18 15:51:01 crc kubenswrapper[4957]: W0218 15:51:01.093962 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a06ac29_ae1b_4ee9_896f_b5522fd99ba0.slice/crio-59b8230dffce9befff29af0b49c3cc4bf211784aca6f5575107abb05b3d7b7ae WatchSource:0}: Error finding container 59b8230dffce9befff29af0b49c3cc4bf211784aca6f5575107abb05b3d7b7ae: Status 404 returned error can't find the container with id 59b8230dffce9befff29af0b49c3cc4bf211784aca6f5575107abb05b3d7b7ae Feb 18 15:51:01 crc kubenswrapper[4957]: I0218 15:51:01.454170 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:01 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:01 crc kubenswrapper[4957]: > Feb 18 15:51:01 crc kubenswrapper[4957]: I0218 
15:51:01.654932 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr" Feb 18 15:51:01 crc kubenswrapper[4957]: I0218 15:51:01.731704 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerStarted","Data":"b7847e2f1d5e0c69802a7ad1abd6e2316ef1cdbabeab1b008c7b9785f8f61569"} Feb 18 15:51:01 crc kubenswrapper[4957]: I0218 15:51:01.731738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerStarted","Data":"59b8230dffce9befff29af0b49c3cc4bf211784aca6f5575107abb05b3d7b7ae"} Feb 18 15:51:01 crc kubenswrapper[4957]: I0218 15:51:01.863336 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pgchj" Feb 18 15:51:02 crc kubenswrapper[4957]: I0218 15:51:02.744786 4957 generic.go:334] "Generic (PLEG): container finished" podID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerID="7303af9945a0d42dfa98f9f786eef94c77a39ccbb9abe0ac67e0886b428d7195" exitCode=0 Feb 18 15:51:02 crc kubenswrapper[4957]: I0218 15:51:02.744850 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgmk" event={"ID":"9a5c9544-331f-47df-898d-d19b2c9fe2b1","Type":"ContainerDied","Data":"7303af9945a0d42dfa98f9f786eef94c77a39ccbb9abe0ac67e0886b428d7195"} Feb 18 15:51:03 crc kubenswrapper[4957]: I0218 15:51:03.407850 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:51:03 crc kubenswrapper[4957]: I0218 15:51:03.759674 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerID="b7847e2f1d5e0c69802a7ad1abd6e2316ef1cdbabeab1b008c7b9785f8f61569" exitCode=0 Feb 18 15:51:03 crc kubenswrapper[4957]: I0218 15:51:03.759780 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerDied","Data":"b7847e2f1d5e0c69802a7ad1abd6e2316ef1cdbabeab1b008c7b9785f8f61569"} Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.001642 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.002333 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.001688 4957 patch_prober.go:28] interesting pod/loki-operator-controller-manager-669bf4b44b-ndlc7 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 
15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.002475 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" podUID="da87ca13-b23a-4345-b79d-46c8e9bec9b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.741854 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.741915 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.741938 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.741993 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.742039 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.742651 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.742683 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.742937 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"dbbd1df2955da23b7bf8778a59c511ed786bb412bed8437feb995b43317ddb58"} pod="openshift-console/downloads-7954f5f757-fxh8s" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.742986 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" containerID="cri-o://dbbd1df2955da23b7bf8778a59c511ed786bb412bed8437feb995b43317ddb58" gracePeriod=2 Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.768387 4957 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.768460 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.775935 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:51:05 crc kubenswrapper[4957]: I0218 15:51:05.793440 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84fccf866c-pq2q4" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.346709 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" podUID="07a618be-7572-49b8-aeb3-12ce37fbe7b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.346839 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xt2c9" podUID="07a618be-7572-49b8-aeb3-12ce37fbe7b3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.527613 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" podUID="18f96572-e72c-48ae-b22b-4c6fb7a4d7b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.527652 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hhg5g" podUID="18f96572-e72c-48ae-b22b-4c6fb7a4d7b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.809293 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.810234 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rd7mm" podUID="eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.819502 4957 generic.go:334] "Generic (PLEG): container finished" podID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerID="dbbd1df2955da23b7bf8778a59c511ed786bb412bed8437feb995b43317ddb58" exitCode=0 Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.819583 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerDied","Data":"dbbd1df2955da23b7bf8778a59c511ed786bb412bed8437feb995b43317ddb58"} Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.819624 4957 scope.go:117] "RemoveContainer" containerID="3d3a1ae29fe36ca19aeed36e4506b26f798c3d34fcc449df5b7dcb5cac3ace47" Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.823463 4957 generic.go:334] "Generic (PLEG): container finished" podID="b596b9fb-f116-4712-81fa-9382d13c295b" containerID="4e6f4ccd1a9e888690a11595ded507a9c3931c292d2c13f291becb44621b3c2b" exitCode=1 Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.823503 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b596b9fb-f116-4712-81fa-9382d13c295b","Type":"ContainerDied","Data":"4e6f4ccd1a9e888690a11595ded507a9c3931c292d2c13f291becb44621b3c2b"} Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.829611 4957 generic.go:334] "Generic (PLEG): container finished" podID="11cb8341-3939-4c82-9745-510f73904864" containerID="bc55b07a7f2ec6e82b508acf90927e8054d9719dc5154e90299a5c59e1b0c522" exitCode=1 Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.829738 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerDied","Data":"bc55b07a7f2ec6e82b508acf90927e8054d9719dc5154e90299a5c59e1b0c522"} Feb 18 15:51:06 crc kubenswrapper[4957]: I0218 15:51:06.830811 4957 scope.go:117] "RemoveContainer" containerID="bc55b07a7f2ec6e82b508acf90927e8054d9719dc5154e90299a5c59e1b0c522" Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.281271 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.281685 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.845104 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerStarted","Data":"08f964e0c70524951714c0d497f0f6a3843332369e4fb5625ea865b2fb73ffd9"} Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.855401 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerStarted","Data":"35efc37db2b6d42a647606ee42db87995bdd1c6993715242445aeb13c929aca1"} Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.855893 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.855995 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: 
connect: connection refused" start-of-body= Feb 18 15:51:07 crc kubenswrapper[4957]: I0218 15:51:07.856039 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:08 crc kubenswrapper[4957]: I0218 15:51:08.444798 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:51:08 crc kubenswrapper[4957]: I0218 15:51:08.867989 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:08 crc kubenswrapper[4957]: I0218 15:51:08.868054 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.077700 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.160645 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.160634 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bl7jx" podUID="84258a40-276a-4da4-8240-603932be25c0" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.202640 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-c8kqz" podUID="06e3f0bd-70d3-493b-ab24-e8f75298f7a3" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.202674 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-549dd7c895-84tm5" podUID="33e1b915-d740-4ec7-b74e-b8b8b6356d4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.404586 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ca-certs\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524487 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dnwm\" (UniqueName: \"kubernetes.io/projected/b596b9fb-f116-4712-81fa-9382d13c295b-kube-api-access-9dnwm\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524542 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-config-data\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524702 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524743 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-temporary\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524838 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config-secret\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ssh-key\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.524923 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-workdir\") pod \"b596b9fb-f116-4712-81fa-9382d13c295b\" (UID: \"b596b9fb-f116-4712-81fa-9382d13c295b\") " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.528649 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-config-data" (OuterVolumeSpecName: "config-data") pod 
"b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.592287 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.596230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.597563 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b596b9fb-f116-4712-81fa-9382d13c295b-kube-api-access-9dnwm" (OuterVolumeSpecName: "kube-api-access-9dnwm") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "kube-api-access-9dnwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.604209 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.612478 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.625786 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.628935 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.629913 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.630022 4957 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.630103 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dnwm\" (UniqueName: \"kubernetes.io/projected/b596b9fb-f116-4712-81fa-9382d13c295b-kube-api-access-9dnwm\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.630172 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.630258 4957 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b596b9fb-f116-4712-81fa-9382d13c295b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.630346 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.643061 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.648857 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b596b9fb-f116-4712-81fa-9382d13c295b" (UID: "b596b9fb-f116-4712-81fa-9382d13c295b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.669963 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.734294 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b596b9fb-f116-4712-81fa-9382d13c295b-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.734336 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.734346 4957 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b596b9fb-f116-4712-81fa-9382d13c295b-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.890366 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b596b9fb-f116-4712-81fa-9382d13c295b","Type":"ContainerDied","Data":"1584ff5c3201d5e2d0d251eb02c49df2003b8cf4a0533d7e045c83d3e6050a30"} Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.890418 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1584ff5c3201d5e2d0d251eb02c49df2003b8cf4a0533d7e045c83d3e6050a30" Feb 18 15:51:09 crc kubenswrapper[4957]: I0218 15:51:09.890502 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:51:10 crc kubenswrapper[4957]: I0218 15:51:10.870392 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:10 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:10 crc kubenswrapper[4957]: > Feb 18 15:51:10 crc kubenswrapper[4957]: I0218 15:51:10.909884 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdjrj" event={"ID":"11cb8341-3939-4c82-9745-510f73904864","Type":"ContainerStarted","Data":"a789f79de387a6723d5bfd65a63ea3807265c5ec574da1975d54cb87a9c56ec8"} Feb 18 15:51:11 crc kubenswrapper[4957]: I0218 15:51:11.427229 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:11 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:11 crc kubenswrapper[4957]: > Feb 18 15:51:13 crc kubenswrapper[4957]: I0218 15:51:13.844279 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:51:13 crc kubenswrapper[4957]: I0218 15:51:13.921986 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-669bf4b44b-ndlc7" Feb 18 15:51:15 crc kubenswrapper[4957]: I0218 15:51:15.740650 4957 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:15 crc kubenswrapper[4957]: I0218 15:51:15.741057 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:15 crc kubenswrapper[4957]: I0218 15:51:15.740752 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:15 crc kubenswrapper[4957]: I0218 15:51:15.741378 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:17 crc kubenswrapper[4957]: I0218 15:51:17.463313 4957 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 4.719566602s: [/var/lib/containers/storage/overlay/a3c9052e77839471dc886109d612642dea1fbacb48fade794ce8efbfde9f59f0/diff /var/log/pods/openstack_openstackclient_aa2f421b-f6d0-4db4-9162-f863e45ca417/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.089247 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:51:18 crc kubenswrapper[4957]: E0218 15:51:18.090077 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b596b9fb-f116-4712-81fa-9382d13c295b" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.091322 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b596b9fb-f116-4712-81fa-9382d13c295b" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.093073 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b596b9fb-f116-4712-81fa-9382d13c295b" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.094292 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.121087 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l59wk" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.125399 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.244702 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bcw\" (UniqueName: \"kubernetes.io/projected/d182365d-2786-484b-855e-dbb5452eb045-kube-api-access-c9bcw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.244760 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.347287 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bcw\" (UniqueName: \"kubernetes.io/projected/d182365d-2786-484b-855e-dbb5452eb045-kube-api-access-c9bcw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.347347 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.500552 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.502410 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.537608 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.554577 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bcw\" (UniqueName: 
\"kubernetes.io/projected/d182365d-2786-484b-855e-dbb5452eb045-kube-api-access-c9bcw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d182365d-2786-484b-855e-dbb5452eb045\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:18 crc kubenswrapper[4957]: I0218 15:51:18.732075 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:51:20 crc kubenswrapper[4957]: I0218 15:51:20.847709 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:20 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:20 crc kubenswrapper[4957]: > Feb 18 15:51:21 crc kubenswrapper[4957]: I0218 15:51:21.394023 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:21 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:21 crc kubenswrapper[4957]: > Feb 18 15:51:22 crc kubenswrapper[4957]: E0218 15:51:22.028812 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a06ac29_ae1b_4ee9_896f_b5522fd99ba0.slice/crio-conmon-08f964e0c70524951714c0d497f0f6a3843332369e4fb5625ea865b2fb73ffd9.scope\": RecentStats: unable to find data in memory cache]" Feb 18 15:51:22 crc kubenswrapper[4957]: I0218 15:51:22.328054 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerID="08f964e0c70524951714c0d497f0f6a3843332369e4fb5625ea865b2fb73ffd9" exitCode=0 Feb 18 15:51:22 crc kubenswrapper[4957]: I0218 15:51:22.328107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerDied","Data":"08f964e0c70524951714c0d497f0f6a3843332369e4fb5625ea865b2fb73ffd9"} Feb 18 15:51:23 crc kubenswrapper[4957]: I0218 15:51:23.451818 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:51:25 crc kubenswrapper[4957]: I0218 15:51:25.741453 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:25 crc kubenswrapper[4957]: I0218 15:51:25.741857 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:25 crc kubenswrapper[4957]: I0218 15:51:25.741500 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:25 crc kubenswrapper[4957]: I0218 15:51:25.741961 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:26 crc kubenswrapper[4957]: I0218 15:51:26.515118 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:51:26 crc kubenswrapper[4957]: I0218 15:51:26.636064 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:51:26 crc kubenswrapper[4957]: I0218 15:51:26.955038 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8675cb849f-2g7hj" Feb 18 15:51:27 crc kubenswrapper[4957]: I0218 15:51:27.517286 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 15:51:27 crc kubenswrapper[4957]: I0218 15:51:27.866222 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:27 crc kubenswrapper[4957]: I0218 15:51:27.867300 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-notification-agent" containerID="cri-o://7b413331cea5e555f4d93bf6d2d3428b505571c72bf8f0f46960e059983bca28" gracePeriod=30 Feb 18 15:51:27 crc kubenswrapper[4957]: I0218 15:51:27.867749 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" containerID="cri-o://daecdfd0e42acc167fb8317a1a09f714d3aa6f05aaed86f55ff9bbaaf60a4415" gracePeriod=30 Feb 18 15:51:27 crc kubenswrapper[4957]: I0218 15:51:27.867911 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="sg-core" containerID="cri-o://0981eed4b32399b279bd7ee497db112c2b06ffc0754e95a1fd4627eab628515d" gracePeriod=30 Feb 18 15:51:27 crc kubenswrapper[4957]: I0218 15:51:27.867964 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="proxy-httpd" containerID="cri-o://a2151e86737bd73097792d0d09c48facd303756dcb6d7280590c9ceebfcbdb3b" gracePeriod=30 Feb 18 15:51:28 crc kubenswrapper[4957]: I0218 15:51:28.462330 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e7fbc309-e5ee-4222-8409-6d68468ae015" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462407 4957 generic.go:334] "Generic (PLEG): container finished" podID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerID="daecdfd0e42acc167fb8317a1a09f714d3aa6f05aaed86f55ff9bbaaf60a4415" exitCode=0 Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462769 4957 generic.go:334] "Generic (PLEG): container finished" podID="64952df6-ca80-4f2b-a8e3-ce0539d32008" 
containerID="a2151e86737bd73097792d0d09c48facd303756dcb6d7280590c9ceebfcbdb3b" exitCode=0 Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462785 4957 generic.go:334] "Generic (PLEG): container finished" podID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerID="0981eed4b32399b279bd7ee497db112c2b06ffc0754e95a1fd4627eab628515d" exitCode=2 Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerDied","Data":"daecdfd0e42acc167fb8317a1a09f714d3aa6f05aaed86f55ff9bbaaf60a4415"} Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462837 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerDied","Data":"a2151e86737bd73097792d0d09c48facd303756dcb6d7280590c9ceebfcbdb3b"} Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462855 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerDied","Data":"0981eed4b32399b279bd7ee497db112c2b06ffc0754e95a1fd4627eab628515d"} Feb 18 15:51:29 crc kubenswrapper[4957]: I0218 15:51:29.462899 4957 scope.go:117] "RemoveContainer" containerID="ee7bfd0150096901eaacb13ccf0a2ce0017054cd8ce3e58ca9d7e0cd72b7f6be" Feb 18 15:51:30 crc kubenswrapper[4957]: I0218 15:51:30.785878 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:30 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:30 crc kubenswrapper[4957]: > Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.402008 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:31 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:31 crc kubenswrapper[4957]: > Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.484549 4957 generic.go:334] "Generic (PLEG): container finished" podID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerID="7b413331cea5e555f4d93bf6d2d3428b505571c72bf8f0f46960e059983bca28" exitCode=0 Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.484594 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerDied","Data":"7b413331cea5e555f4d93bf6d2d3428b505571c72bf8f0f46960e059983bca28"} Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.542672 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.726751 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.811377 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 15:51:31 crc kubenswrapper[4957]: I0218 15:51:31.967637 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 15:51:33 crc kubenswrapper[4957]: I0218 
15:51:33.430954 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.739829 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.739971 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.740474 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.740518 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.740539 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.741048 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.741090 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.741592 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"35efc37db2b6d42a647606ee42db87995bdd1c6993715242445aeb13c929aca1"} pod="openshift-console/downloads-7954f5f757-fxh8s" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 18 15:51:35 crc kubenswrapper[4957]: I0218 15:51:35.741633 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" containerID="cri-o://35efc37db2b6d42a647606ee42db87995bdd1c6993715242445aeb13c929aca1" gracePeriod=2 Feb 18 15:51:35 crc kubenswrapper[4957]: E0218 15:51:35.944347 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 15:51:35 crc kubenswrapper[4957]: E0218 15:51:35.947538 4957 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrdss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8lgmk_openshift-marketplace(9a5c9544-331f-47df-898d-d19b2c9fe2b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 15:51:35 crc kubenswrapper[4957]: E0218 15:51:35.949645 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" Feb 18 15:51:36 crc kubenswrapper[4957]: I0218 15:51:36.607923 4957 generic.go:334] "Generic (PLEG): container finished" podID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerID="35efc37db2b6d42a647606ee42db87995bdd1c6993715242445aeb13c929aca1" exitCode=0 Feb 18 15:51:36 crc kubenswrapper[4957]: I0218 15:51:36.608045 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerDied","Data":"35efc37db2b6d42a647606ee42db87995bdd1c6993715242445aeb13c929aca1"} Feb 18 15:51:36 crc kubenswrapper[4957]: I0218 15:51:36.608762 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxh8s" event={"ID":"6e462cfd-ac3c-4e75-bcce-f8291746b89e","Type":"ContainerStarted","Data":"2a26195f0e2bb61cf43aaa2f57792214aebcfcaff0cdee001539412045b1f14e"} Feb 18 15:51:36 crc kubenswrapper[4957]: I0218 15:51:36.608836 4957 scope.go:117] "RemoveContainer" containerID="dbbd1df2955da23b7bf8778a59c511ed786bb412bed8437feb995b43317ddb58" Feb 18 15:51:36 crc kubenswrapper[4957]: I0218 15:51:36.609019 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:51:36 crc kubenswrapper[4957]: 
I0218 15:51:36.609441 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:36 crc kubenswrapper[4957]: I0218 15:51:36.609493 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.281596 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.281873 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.281921 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.283292 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d09aa8248fe62b5495e82460606315a805e4d670c5f748bbdf39b1d2a62e0d2a"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.283347 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://d09aa8248fe62b5495e82460606315a805e4d670c5f748bbdf39b1d2a62e0d2a" gracePeriod=600 Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.424154 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.516655 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-ceilometer-tls-certs\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.516779 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhg77\" (UniqueName: \"kubernetes.io/projected/64952df6-ca80-4f2b-a8e3-ce0539d32008-kube-api-access-rhg77\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.516856 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-combined-ca-bundle\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.516916 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-run-httpd\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.516947 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-sg-core-conf-yaml\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.517002 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-scripts\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.517041 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-log-httpd\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.517057 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-config-data\") pod \"64952df6-ca80-4f2b-a8e3-ce0539d32008\" (UID: \"64952df6-ca80-4f2b-a8e3-ce0539d32008\") " Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.558038 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.578475 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-scripts" (OuterVolumeSpecName: "scripts") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.600123 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.606924 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64952df6-ca80-4f2b-a8e3-ce0539d32008-kube-api-access-rhg77" (OuterVolumeSpecName: "kube-api-access-rhg77") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "kube-api-access-rhg77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.607385 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.621665 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhg77\" (UniqueName: \"kubernetes.io/projected/64952df6-ca80-4f2b-a8e3-ce0539d32008-kube-api-access-rhg77\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.633940 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.634946 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.635111 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64952df6-ca80-4f2b-a8e3-ce0539d32008-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.650159 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d182365d-2786-484b-855e-dbb5452eb045","Type":"ContainerStarted","Data":"d85d28b9d78d0f040881e3e2d4eea9a377549528fa5650cdf13ac7eec830d998"} Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.663892 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.663937 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.709997 4957 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64952df6-ca80-4f2b-a8e3-ce0539d32008","Type":"ContainerDied","Data":"c894a00d0a29ccfc0f7d4a82480d6fd432c8aa19a68f3046bca522108817b4a9"} Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.710048 4957 scope.go:117] "RemoveContainer" containerID="daecdfd0e42acc167fb8317a1a09f714d3aa6f05aaed86f55ff9bbaaf60a4415" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.710159 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.751575 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="d09aa8248fe62b5495e82460606315a805e4d670c5f748bbdf39b1d2a62e0d2a" exitCode=0 Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.751613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"d09aa8248fe62b5495e82460606315a805e4d670c5f748bbdf39b1d2a62e0d2a"} Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.795581 4957 scope.go:117] "RemoveContainer" containerID="a2151e86737bd73097792d0d09c48facd303756dcb6d7280590c9ceebfcbdb3b" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.824508 4957 scope.go:117] "RemoveContainer" containerID="0981eed4b32399b279bd7ee497db112c2b06ffc0754e95a1fd4627eab628515d" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.860338 4957 scope.go:117] "RemoveContainer" containerID="7b413331cea5e555f4d93bf6d2d3428b505571c72bf8f0f46960e059983bca28" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.921642 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.948362 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.956047 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:37 crc kubenswrapper[4957]: I0218 15:51:37.960957 4957 scope.go:117] "RemoveContainer" containerID="f9077133a95a58a1f91a3ffae91cbaefd9887cd4916d6d9348fd595927183247" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.019782 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.050081 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.050261 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.095830 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-config-data" (OuterVolumeSpecName: "config-data") pod "64952df6-ca80-4f2b-a8e3-ce0539d32008" (UID: "64952df6-ca80-4f2b-a8e3-ce0539d32008"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.155555 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64952df6-ca80-4f2b-a8e3-ce0539d32008-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.365646 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.380233 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.391825 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:38 crc kubenswrapper[4957]: E0218 15:51:38.392327 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392345 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: E0218 15:51:38.392367 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="sg-core" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392374 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="sg-core" Feb 18 15:51:38 crc kubenswrapper[4957]: E0218 15:51:38.392397 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392402 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: E0218 15:51:38.392430 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="proxy-httpd" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392436 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="proxy-httpd" Feb 18 15:51:38 crc kubenswrapper[4957]: E0218 15:51:38.392457 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-notification-agent" Feb 
18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392463 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-notification-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392676 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-notification-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392702 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392717 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="sg-core" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.392725 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="proxy-httpd" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.394221 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" containerName="ceilometer-central-agent" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.401812 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.408560 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.409309 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.409527 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.421211 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566333 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566486 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-scripts\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566527 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566590 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-run-httpd\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 
15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566614 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-config-data\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566680 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d4p\" (UniqueName: \"kubernetes.io/projected/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-kube-api-access-p7d4p\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.566711 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-log-httpd\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668361 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-run-httpd\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668442 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-config-data\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7d4p\" (UniqueName: \"kubernetes.io/projected/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-kube-api-access-p7d4p\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668599 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-log-httpd\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668631 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668733 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-scripts\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668782 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.668858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-run-httpd\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.669576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-log-httpd\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.802842 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"} Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.821821 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerStarted","Data":"ebd1669255cf3a7eba480b75c2dc342de6f1a397a728af21fdc21f86037c5b65"} Feb 18 15:51:38 crc kubenswrapper[4957]: I0218 15:51:38.866406 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fjk2" podStartSLOduration=6.200394943 podStartE2EDuration="39.860866311s" podCreationTimestamp="2026-02-18 15:50:59 +0000 UTC" firstStartedPulling="2026-02-18 15:51:03.764204794 +0000 UTC m=+4770.285069528" lastFinishedPulling="2026-02-18 15:51:37.424676152 +0000 UTC m=+4803.945540896" observedRunningTime="2026-02-18 15:51:38.855824667 +0000 UTC m=+4805.376689441" watchObservedRunningTime="2026-02-18 15:51:38.860866311 +0000 UTC m=+4805.381731055" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.095564 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.096163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.096628 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-config-data\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.103000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7d4p\" (UniqueName: \"kubernetes.io/projected/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-kube-api-access-p7d4p\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.105044 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-scripts\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.105980 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " pod="openstack/ceilometer-0" Feb 18 15:51:39 crc kubenswrapper[4957]: I0218 15:51:39.325283 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.103175 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.236110 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64952df6-ca80-4f2b-a8e3-ce0539d32008" path="/var/lib/kubelet/pods/64952df6-ca80-4f2b-a8e3-ce0539d32008/volumes" Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.332037 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.332662 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.764559 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:40 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:40 crc kubenswrapper[4957]: > Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.794115 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.865147 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d182365d-2786-484b-855e-dbb5452eb045","Type":"ContainerStarted","Data":"8bad59ac806d92a3c5242ec2c0918ae54d0dffb4b4624d1f7d1a232245e4c4e8"} Feb 18 15:51:40 crc kubenswrapper[4957]: I0218 15:51:40.888739 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=21.638347126 podStartE2EDuration="23.888722244s" podCreationTimestamp="2026-02-18 15:51:17 +0000 UTC" firstStartedPulling="2026-02-18 15:51:37.623086907 
+0000 UTC m=+4804.143951651" lastFinishedPulling="2026-02-18 15:51:39.873462025 +0000 UTC m=+4806.394326769" observedRunningTime="2026-02-18 15:51:40.88052014 +0000 UTC m=+4807.401384904" watchObservedRunningTime="2026-02-18 15:51:40.888722244 +0000 UTC m=+4807.409586988" Feb 18 15:51:41 crc kubenswrapper[4957]: I0218 15:51:41.388820 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6fjk2" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:41 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:41 crc kubenswrapper[4957]: > Feb 18 15:51:41 crc kubenswrapper[4957]: I0218 15:51:41.426457 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:41 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:41 crc kubenswrapper[4957]: > Feb 18 15:51:41 crc kubenswrapper[4957]: I0218 15:51:41.877090 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerStarted","Data":"cea05ccac1bce02675087627fa2baa8b9a817d98ab7b3aaba41b2692ef509788"} Feb 18 15:51:42 crc kubenswrapper[4957]: I0218 15:51:42.907352 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerStarted","Data":"dc1ebefe56b6d18c57ea7819674070f2d5629390f369d0b0eb22a0c0308a521f"} Feb 18 15:51:43 crc kubenswrapper[4957]: I0218 15:51:43.922262 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerStarted","Data":"b1e7f76411b3a00051d54ec0d6f68c655c45fc9a7456727b6c534b34c623408e"} Feb 18 15:51:44 crc kubenswrapper[4957]: I0218 15:51:44.942514 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerStarted","Data":"db5dbdc5ad308cda0208c6ea2ecab7e288b3df70768e53ee8c9c0ce64806afa3"} Feb 18 15:51:45 crc kubenswrapper[4957]: I0218 15:51:45.740492 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:45 crc kubenswrapper[4957]: I0218 15:51:45.740749 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:45 crc kubenswrapper[4957]: I0218 15:51:45.740569 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:45 crc kubenswrapper[4957]: I0218 15:51:45.740976 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:47 crc kubenswrapper[4957]: I0218 15:51:47.988198 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerStarted","Data":"f02f056d324f100dbb32ec26524bad1a05928ed6fdfda98a581bb5198d04ec80"} Feb 18 15:51:47 crc kubenswrapper[4957]: I0218 15:51:47.988369 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-central-agent" containerID="cri-o://dc1ebefe56b6d18c57ea7819674070f2d5629390f369d0b0eb22a0c0308a521f" gracePeriod=30 Feb 18 15:51:47 crc kubenswrapper[4957]: I0218 15:51:47.988393 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-notification-agent" containerID="cri-o://b1e7f76411b3a00051d54ec0d6f68c655c45fc9a7456727b6c534b34c623408e" gracePeriod=30 Feb 18 15:51:47 crc kubenswrapper[4957]: I0218 15:51:47.988821 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 15:51:47 crc kubenswrapper[4957]: I0218 15:51:47.988476 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="proxy-httpd" containerID="cri-o://f02f056d324f100dbb32ec26524bad1a05928ed6fdfda98a581bb5198d04ec80" gracePeriod=30 Feb 18 15:51:47 crc kubenswrapper[4957]: I0218 15:51:47.988433 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="sg-core" containerID="cri-o://db5dbdc5ad308cda0208c6ea2ecab7e288b3df70768e53ee8c9c0ce64806afa3" gracePeriod=30 Feb 18 15:51:48 crc kubenswrapper[4957]: I0218 15:51:48.022574 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.360276781 podStartE2EDuration="10.022553812s" podCreationTimestamp="2026-02-18 15:51:38 +0000 UTC" firstStartedPulling="2026-02-18 15:51:41.204728827 +0000 UTC m=+4807.725593571" lastFinishedPulling="2026-02-18 15:51:46.867005858 +0000 UTC m=+4813.387870602" observedRunningTime="2026-02-18 15:51:48.008831921 +0000 UTC m=+4814.529696665" watchObservedRunningTime="2026-02-18 15:51:48.022553812 +0000 UTC m=+4814.543418556" Feb 18 15:51:49 crc kubenswrapper[4957]: I0218 15:51:49.003887 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerDied","Data":"db5dbdc5ad308cda0208c6ea2ecab7e288b3df70768e53ee8c9c0ce64806afa3"} Feb 18 15:51:49 crc kubenswrapper[4957]: I0218 15:51:49.004374 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerID="db5dbdc5ad308cda0208c6ea2ecab7e288b3df70768e53ee8c9c0ce64806afa3" exitCode=2 Feb 18 15:51:50 crc kubenswrapper[4957]: I0218 15:51:50.017467 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgmk" event={"ID":"9a5c9544-331f-47df-898d-d19b2c9fe2b1","Type":"ContainerStarted","Data":"f1e2e7021a759dfca224249e7a03c1e1b21b6ce55c8d36f13007267ab0c346d1"} Feb 18 15:51:50 crc kubenswrapper[4957]: I0218 
15:51:50.830146 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gllqp" podUID="b95ede57-e275-4ba0-834d-43356f6b960b" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:50 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:50 crc kubenswrapper[4957]: > Feb 18 15:51:51 crc kubenswrapper[4957]: I0218 15:51:51.840860 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6fjk2" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:51 crc kubenswrapper[4957]: > Feb 18 15:51:51 crc kubenswrapper[4957]: I0218 15:51:51.854069 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:51:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:51:51 crc kubenswrapper[4957]: > Feb 18 15:51:53 crc kubenswrapper[4957]: I0218 15:51:53.626457 4957 trace.go:236] Trace[2106819946]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/certified-operators-mjqvd" (18-Feb-2026 15:51:52.502) (total time: 1120ms): Feb 18 15:51:53 crc kubenswrapper[4957]: Trace[2106819946]: [1.120472924s] [1.120472924s] END Feb 18 15:51:55 crc kubenswrapper[4957]: I0218 15:51:55.740689 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:55 crc kubenswrapper[4957]: I0218 15:51:55.740809 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxh8s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 18 15:51:55 crc kubenswrapper[4957]: I0218 15:51:55.741232 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:55 crc kubenswrapper[4957]: I0218 15:51:55.741278 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fxh8s" podUID="6e462cfd-ac3c-4e75-bcce-f8291746b89e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.766319 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.837429 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gllqp" Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.846388 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmxxj"] Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 
15:51:59.849045 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.920984 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmxxj"] Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.922093 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-utilities\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.922708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lclp\" (UniqueName: \"kubernetes.io/projected/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-kube-api-access-2lclp\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:51:59 crc kubenswrapper[4957]: I0218 15:51:59.922815 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-catalog-content\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.025741 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lclp\" (UniqueName: \"kubernetes.io/projected/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-kube-api-access-2lclp\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.025819 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-catalog-content\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.025978 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-utilities\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.026446 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-catalog-content\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.028038 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-utilities\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 
15:52:00.072899 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lclp\" (UniqueName: \"kubernetes.io/projected/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-kube-api-access-2lclp\") pod \"redhat-marketplace-zmxxj\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") " pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.146894 4957 generic.go:334] "Generic (PLEG): container finished" podID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerID="f1e2e7021a759dfca224249e7a03c1e1b21b6ce55c8d36f13007267ab0c346d1" exitCode=0 Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.147477 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgmk" event={"ID":"9a5c9544-331f-47df-898d-d19b2c9fe2b1","Type":"ContainerDied","Data":"f1e2e7021a759dfca224249e7a03c1e1b21b6ce55c8d36f13007267ab0c346d1"} Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.190346 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:00 crc kubenswrapper[4957]: I0218 15:52:00.785476 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmxxj"] Feb 18 15:52:00 crc kubenswrapper[4957]: W0218 15:52:00.801679 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod084dfd3c_aa7d_49a6_bae1_bd2cdeeccba2.slice/crio-9fd4dea043e2f1d4029481d66a3e51fa046971ed7f2c8976af27ff2d34c48ed9 WatchSource:0}: Error finding container 9fd4dea043e2f1d4029481d66a3e51fa046971ed7f2c8976af27ff2d34c48ed9: Status 404 returned error can't find the container with id 9fd4dea043e2f1d4029481d66a3e51fa046971ed7f2c8976af27ff2d34c48ed9 Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.162268 4957 generic.go:334] "Generic (PLEG): container finished" podID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerID="aa3822a62d1b5af22df4495292d643ee67263ab35c27b60d0917d08d723fb8d9" exitCode=0 Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.162328 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerDied","Data":"aa3822a62d1b5af22df4495292d643ee67263ab35c27b60d0917d08d723fb8d9"} Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.162400 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerStarted","Data":"9fd4dea043e2f1d4029481d66a3e51fa046971ed7f2c8976af27ff2d34c48ed9"} Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.167976 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgmk" event={"ID":"9a5c9544-331f-47df-898d-d19b2c9fe2b1","Type":"ContainerStarted","Data":"4eccd74e32339acf6c802b8ed25148dce7b749fa36e956ab6dbe54ff9f382dea"} Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.219966 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8lgmk" podStartSLOduration=7.423817195 podStartE2EDuration="1m5.21994623s" podCreationTimestamp="2026-02-18 15:50:56 +0000 UTC" firstStartedPulling="2026-02-18 15:51:02.760572647 +0000 UTC m=+4769.281437391" lastFinishedPulling="2026-02-18 15:52:00.556701682 +0000 UTC m=+4827.077566426" observedRunningTime="2026-02-18 
15:52:01.215825993 +0000 UTC m=+4827.736690747" watchObservedRunningTime="2026-02-18 15:52:01.21994623 +0000 UTC m=+4827.740810974" Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.390554 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6fjk2" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server" probeResult="failure" output=< Feb 18 15:52:01 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:52:01 crc kubenswrapper[4957]: > Feb 18 15:52:01 crc kubenswrapper[4957]: I0218 15:52:01.400928 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:52:01 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:52:01 crc kubenswrapper[4957]: > Feb 18 15:52:03 crc kubenswrapper[4957]: I0218 15:52:03.202032 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerStarted","Data":"b48920229d10dca71dd3eb68aa1960214c7dd04ee231409984aaa87ef6719595"} Feb 18 15:52:05 crc kubenswrapper[4957]: I0218 15:52:05.224736 4957 generic.go:334] "Generic (PLEG): container finished" podID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerID="b48920229d10dca71dd3eb68aa1960214c7dd04ee231409984aaa87ef6719595" exitCode=0 Feb 18 15:52:05 crc kubenswrapper[4957]: I0218 15:52:05.224789 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerDied","Data":"b48920229d10dca71dd3eb68aa1960214c7dd04ee231409984aaa87ef6719595"} Feb 18 15:52:05 crc kubenswrapper[4957]: I0218 15:52:05.229878 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:52:05 crc kubenswrapper[4957]: I0218 15:52:05.796795 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fxh8s" Feb 18 15:52:06 crc kubenswrapper[4957]: I0218 15:52:06.239778 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerStarted","Data":"b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc"} Feb 18 15:52:06 crc kubenswrapper[4957]: I0218 15:52:06.277640 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmxxj" podStartSLOduration=2.7397311909999997 podStartE2EDuration="7.277622066s" podCreationTimestamp="2026-02-18 15:51:59 +0000 UTC" firstStartedPulling="2026-02-18 15:52:01.164289781 +0000 UTC m=+4827.685154545" lastFinishedPulling="2026-02-18 15:52:05.702180676 +0000 UTC m=+4832.223045420" observedRunningTime="2026-02-18 15:52:06.260904769 +0000 UTC m=+4832.781769513" watchObservedRunningTime="2026-02-18 15:52:06.277622066 +0000 UTC m=+4832.798486810" Feb 18 15:52:07 crc kubenswrapper[4957]: I0218 15:52:07.124478 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8lgmk" Feb 18 15:52:07 crc kubenswrapper[4957]: I0218 15:52:07.124535 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8lgmk" 
Feb 18 15:52:08 crc kubenswrapper[4957]: I0218 15:52:08.173086 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerName="registry-server" probeResult="failure" output=< Feb 18 15:52:08 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:52:08 crc kubenswrapper[4957]: > Feb 18 15:52:09 crc kubenswrapper[4957]: I0218 15:52:09.344944 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 15:52:10 crc kubenswrapper[4957]: I0218 15:52:10.191295 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:10 crc kubenswrapper[4957]: I0218 15:52:10.191673 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:10 crc kubenswrapper[4957]: I0218 15:52:10.257101 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:10 crc kubenswrapper[4957]: I0218 15:52:10.394642 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:52:10 crc kubenswrapper[4957]: I0218 15:52:10.458242 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:52:11 crc kubenswrapper[4957]: I0218 15:52:11.397188 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:52:11 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:52:11 crc kubenswrapper[4957]: > Feb 18 15:52:17 crc kubenswrapper[4957]: I0218 15:52:17.602854 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fjk2"] Feb 18 15:52:17 crc kubenswrapper[4957]: I0218 15:52:17.604618 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6fjk2" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server" containerID="cri-o://ebd1669255cf3a7eba480b75c2dc342de6f1a397a728af21fdc21f86037c5b65" gracePeriod=2 Feb 18 15:52:17 crc kubenswrapper[4957]: I0218 15:52:17.963600 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6llw9/must-gather-qx9qn"] Feb 18 15:52:17 crc kubenswrapper[4957]: I0218 15:52:17.980429 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.043024 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6llw9/must-gather-qx9qn"] Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.070578 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6llw9"/"openshift-service-ca.crt" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.070837 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6llw9"/"kube-root-ca.crt" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.161160 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-must-gather-output\") pod \"must-gather-qx9qn\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.161282 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btj9w\" (UniqueName: \"kubernetes.io/projected/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-kube-api-access-btj9w\") pod \"must-gather-qx9qn\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.263458 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btj9w\" (UniqueName: \"kubernetes.io/projected/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-kube-api-access-btj9w\") pod \"must-gather-qx9qn\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.263768 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-must-gather-output\") pod \"must-gather-qx9qn\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.264364 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-must-gather-output\") pod \"must-gather-qx9qn\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.300175 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btj9w\" (UniqueName: \"kubernetes.io/projected/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-kube-api-access-btj9w\") pod \"must-gather-qx9qn\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.335530 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6llw9/must-gather-qx9qn"
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.401271 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerID="f02f056d324f100dbb32ec26524bad1a05928ed6fdfda98a581bb5198d04ec80" exitCode=137
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.401300 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerID="b1e7f76411b3a00051d54ec0d6f68c655c45fc9a7456727b6c534b34c623408e" exitCode=137
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.401309 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerID="dc1ebefe56b6d18c57ea7819674070f2d5629390f369d0b0eb22a0c0308a521f" exitCode=137
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.401385 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerDied","Data":"f02f056d324f100dbb32ec26524bad1a05928ed6fdfda98a581bb5198d04ec80"}
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.401458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerDied","Data":"b1e7f76411b3a00051d54ec0d6f68c655c45fc9a7456727b6c534b34c623408e"}
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.401473 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerDied","Data":"dc1ebefe56b6d18c57ea7819674070f2d5629390f369d0b0eb22a0c0308a521f"}
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.407467 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerID="ebd1669255cf3a7eba480b75c2dc342de6f1a397a728af21fdc21f86037c5b65" exitCode=0
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.407507 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerDied","Data":"ebd1669255cf3a7eba480b75c2dc342de6f1a397a728af21fdc21f86037c5b65"}
Feb 18 15:52:18 crc kubenswrapper[4957]: I0218 15:52:18.465947 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:18 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:18 crc kubenswrapper[4957]: >
Feb 18 15:52:19 crc kubenswrapper[4957]: I0218 15:52:19.752520 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6llw9/must-gather-qx9qn"]
Feb 18 15:52:19 crc kubenswrapper[4957]: W0218 15:52:19.753551 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc567a0d5_a5c1_4eb5_bbbf_5b68e1bff3ba.slice/crio-cc624fe409693c334005ba8ce3d030f6bcdabfca931191279e037c89757e2b18 WatchSource:0}: Error finding container cc624fe409693c334005ba8ce3d030f6bcdabfca931191279e037c89757e2b18: Status 404 returned error can't find the container with id cc624fe409693c334005ba8ce3d030f6bcdabfca931191279e037c89757e2b18
Feb 18 15:52:19 crc kubenswrapper[4957]: I0218 15:52:19.837341 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 15:52:19 crc kubenswrapper[4957]: I0218 15:52:19.841850 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fjk2"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.012962 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-utilities\") pod \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013057 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np4sf\" (UniqueName: \"kubernetes.io/projected/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-kube-api-access-np4sf\") pod \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013162 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-combined-ca-bundle\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013192 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-config-data\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013227 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-catalog-content\") pod \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\" (UID: \"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013251 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-scripts\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013290 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-ceilometer-tls-certs\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013332 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-sg-core-conf-yaml\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") "
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013355 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-log-httpd\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") "
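
[editor's note] The repeated "Probe failed ... timeout: failed to connect service \":50051\" within 1s" lines above are the registry-server startup probe; the ExecSync errors later in this log show the probe command is grpc_health_probe -addr=:50051. A minimal Go sketch of what such a check does, assuming the standard gRPC health/v1 API (this is not the actual probe binary's source):

    // probe_sketch.go — minimal grpc_health_probe-style check; a sketch,
    // not the real tool. A non-zero exit is what the kubelet records as
    // probeResult="failure".
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
    	// The 1s budget mirrors the "within 1s" timeout in the log (an assumption
    	// about the probe's timeoutSeconds).
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()

    	conn, err := grpc.DialContext(ctx, "localhost:50051",
    		grpc.WithTransportCredentials(insecure.NewCredentials()),
    		grpc.WithBlock())
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "timeout: failed to connect service \":50051\"")
    		os.Exit(1)
    	}
    	defer conn.Close()

    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
    		os.Exit(1) // probe failure
    	}
    }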
"operationExecutor.UnmountVolume started for volume \"kube-api-access-p7d4p\" (UniqueName: \"kubernetes.io/projected/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-kube-api-access-p7d4p\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013438 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-utilities" (OuterVolumeSpecName: "utilities") pod "1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" (UID: "1a06ac29-ae1b-4ee9-896f-b5522fd99ba0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.013452 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-run-httpd\") pod \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\" (UID: \"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122\") " Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.014202 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.015915 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.015966 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.016227 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.041000 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-scripts" (OuterVolumeSpecName: "scripts") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.041063 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-kube-api-access-np4sf" (OuterVolumeSpecName: "kube-api-access-np4sf") pod "1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" (UID: "1a06ac29-ae1b-4ee9-896f-b5522fd99ba0"). InnerVolumeSpecName "kube-api-access-np4sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.042962 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-kube-api-access-p7d4p" (OuterVolumeSpecName: "kube-api-access-p7d4p") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "kube-api-access-p7d4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.053456 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.083555 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" (UID: "1a06ac29-ae1b-4ee9-896f-b5522fd99ba0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.119355 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np4sf\" (UniqueName: \"kubernetes.io/projected/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-kube-api-access-np4sf\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.119389 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.119402 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.119546 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.119564 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.119578 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7d4p\" (UniqueName: \"kubernetes.io/projected/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-kube-api-access-p7d4p\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.129018 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.132105 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.200573 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-config-data" (OuterVolumeSpecName: "config-data") pod "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" (UID: "bd00e1ad-a5c2-463f-a710-cdaf6ab4c122"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.221480 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.221519 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.221530 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.273839 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.437753 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fjk2" event={"ID":"1a06ac29-ae1b-4ee9-896f-b5522fd99ba0","Type":"ContainerDied","Data":"59b8230dffce9befff29af0b49c3cc4bf211784aca6f5575107abb05b3d7b7ae"} Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.438201 4957 scope.go:117] "RemoveContainer" containerID="ebd1669255cf3a7eba480b75c2dc342de6f1a397a728af21fdc21f86037c5b65" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.437784 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fjk2" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.440447 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/must-gather-qx9qn" event={"ID":"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba","Type":"ContainerStarted","Data":"cc624fe409693c334005ba8ce3d030f6bcdabfca931191279e037c89757e2b18"} Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.447226 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd00e1ad-a5c2-463f-a710-cdaf6ab4c122","Type":"ContainerDied","Data":"cea05ccac1bce02675087627fa2baa8b9a817d98ab7b3aaba41b2692ef509788"} Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.447320 4957 util.go:48] "No ready sandbox for pod can be found. 
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.447320 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.472923 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fjk2"]
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.486096 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6fjk2"]
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.495820 4957 scope.go:117] "RemoveContainer" containerID="08f964e0c70524951714c0d497f0f6a3843332369e4fb5625ea865b2fb73ffd9"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.496943 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.507289 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.532900 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533583 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-notification-agent"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533612 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-notification-agent"
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533631 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="sg-core"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533640 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="sg-core"
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533663 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="extract-utilities"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533672 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="extract-utilities"
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533709 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="proxy-httpd"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533717 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="proxy-httpd"
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533726 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="extract-content"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533734 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="extract-content"
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533768 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533777 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server"
Feb 18 15:52:20 crc kubenswrapper[4957]: E0218 15:52:20.533790 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-central-agent"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.533798 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-central-agent"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.534086 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" containerName="registry-server"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.534103 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-notification-agent"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.534119 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="ceilometer-central-agent"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.534158 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="sg-core"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.534170 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" containerName="proxy-httpd"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.542778 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.549529 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.549723 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.549866 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.563569 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.604282 4957 scope.go:117] "RemoveContainer" containerID="b7847e2f1d5e0c69802a7ad1abd6e2316ef1cdbabeab1b008c7b9785f8f61569"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.630304 4957 scope.go:117] "RemoveContainer" containerID="f02f056d324f100dbb32ec26524bad1a05928ed6fdfda98a581bb5198d04ec80"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.631664 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-scripts\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0"
Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.631721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpch9\" (UniqueName: \"kubernetes.io/projected/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-kube-api-access-gpch9\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0"
\"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.631833 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.631942 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-run-httpd\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.631966 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-config-data\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.631980 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.632033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.652331 4957 scope.go:117] "RemoveContainer" containerID="db5dbdc5ad308cda0208c6ea2ecab7e288b3df70768e53ee8c9c0ce64806afa3" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.684036 4957 scope.go:117] "RemoveContainer" containerID="b1e7f76411b3a00051d54ec0d6f68c655c45fc9a7456727b6c534b34c623408e" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.734724 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-scripts\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.734771 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpch9\" (UniqueName: \"kubernetes.io/projected/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-kube-api-access-gpch9\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.734846 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-log-httpd\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.734903 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.734976 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-run-httpd\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.734999 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-config-data\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.735017 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.735076 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.735988 4957 scope.go:117] "RemoveContainer" containerID="dc1ebefe56b6d18c57ea7819674070f2d5629390f369d0b0eb22a0c0308a521f" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.737594 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-run-httpd\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.737956 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-log-httpd\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.742386 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.742632 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.744021 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-config-data\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: 
I0218 15:52:20.744136 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-scripts\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.753110 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.759725 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpch9\" (UniqueName: \"kubernetes.io/projected/807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5-kube-api-access-gpch9\") pod \"ceilometer-0\" (UID: \"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5\") " pod="openstack/ceilometer-0" Feb 18 15:52:20 crc kubenswrapper[4957]: I0218 15:52:20.916163 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 15:52:21 crc kubenswrapper[4957]: I0218 15:52:21.391470 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=< Feb 18 15:52:21 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 15:52:21 crc kubenswrapper[4957]: > Feb 18 15:52:21 crc kubenswrapper[4957]: I0218 15:52:21.391864 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:52:21 crc kubenswrapper[4957]: I0218 15:52:21.392530 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"7ad31d8e6a406cf0887d96673a35c5569584e551a89febcee4c6cdcfc7645dff"} pod="openshift-marketplace/redhat-operators-vrpnn" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 18 15:52:21 crc kubenswrapper[4957]: I0218 15:52:21.392572 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" containerID="cri-o://7ad31d8e6a406cf0887d96673a35c5569584e551a89febcee4c6cdcfc7645dff" gracePeriod=30 Feb 18 15:52:21 crc kubenswrapper[4957]: I0218 15:52:21.560354 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 15:52:22 crc kubenswrapper[4957]: I0218 15:52:22.235025 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a06ac29-ae1b-4ee9-896f-b5522fd99ba0" path="/var/lib/kubelet/pods/1a06ac29-ae1b-4ee9-896f-b5522fd99ba0/volumes" Feb 18 15:52:22 crc kubenswrapper[4957]: I0218 15:52:22.237340 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd00e1ad-a5c2-463f-a710-cdaf6ab4c122" path="/var/lib/kubelet/pods/bd00e1ad-a5c2-463f-a710-cdaf6ab4c122/volumes" Feb 18 15:52:22 crc kubenswrapper[4957]: I0218 15:52:22.487061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5","Type":"ContainerStarted","Data":"22396fe206172553f1e8acefaff416fc80c9272d391d7fdc2f6add192dbc423e"} Feb 18 15:52:23 crc kubenswrapper[4957]: I0218 
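
[editor's note] Above, redhat-operators-vrpnn fails its startup probe, the kubelet records "Container registry-server failed startup probe, will be restarted", and kills the container with gracePeriod=30. A sketch of the failureThreshold bookkeeping behind that decision (the threshold value here is an assumption; the real one comes from the container's startupProbe spec):

    // startupprobe_sketch.go — sketch of startup-probe failure counting that
    // leads to a graceful kill; threshold and types are illustrative assumptions.
    package main

    import "fmt"

    type worker struct {
    	failures         int
    	failureThreshold int // from the container's startupProbe spec (assumed 3 here)
    }

    // onProbeResult returns true when the container should be restarted.
    func (w *worker) onProbeResult(success bool) bool {
    	if success {
    		w.failures = 0
    		return false
    	}
    	w.failures++
    	return w.failures >= w.failureThreshold
    }

    func main() {
    	w := &worker{failureThreshold: 3}
    	for i := 0; i < 3; i++ {
    		if w.onProbeResult(false) {
    			// Corresponds to "Killing container with a grace period" gracePeriod=30.
    			fmt.Println("Container failed startup probe, will be restarted (gracePeriod=30)")
    		}
    	}
    }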
Feb 18 15:52:23 crc kubenswrapper[4957]: I0218 15:52:23.503912 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5","Type":"ContainerStarted","Data":"a816862ec99234f5118821790d5991e9f3d7191f77fa86213521cedb5d1f6644"}
Feb 18 15:52:26 crc kubenswrapper[4957]: I0218 15:52:26.211456 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmxxj"]
Feb 18 15:52:26 crc kubenswrapper[4957]: I0218 15:52:26.212214 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zmxxj" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="registry-server" containerID="cri-o://b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc" gracePeriod=2
Feb 18 15:52:26 crc kubenswrapper[4957]: I0218 15:52:26.552988 4957 generic.go:334] "Generic (PLEG): container finished" podID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerID="b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc" exitCode=0
Feb 18 15:52:26 crc kubenswrapper[4957]: I0218 15:52:26.553146 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerDied","Data":"b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc"}
Feb 18 15:52:28 crc kubenswrapper[4957]: I0218 15:52:28.191801 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:28 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:28 crc kubenswrapper[4957]: >
Feb 18 15:52:30 crc kubenswrapper[4957]: E0218 15:52:30.191613 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc is running failed: container process not found" containerID="b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:52:30 crc kubenswrapper[4957]: E0218 15:52:30.192582 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc is running failed: container process not found" containerID="b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:52:30 crc kubenswrapper[4957]: E0218 15:52:30.192894 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc is running failed: container process not found" containerID="b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:52:30 crc kubenswrapper[4957]: E0218 15:52:30.192938 4957 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-zmxxj" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="registry-server"
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.608495 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/must-gather-qx9qn" event={"ID":"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba","Type":"ContainerStarted","Data":"e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba"}
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.636751 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5","Type":"ContainerStarted","Data":"1d2839c635d5fc3aa3c49f15a018099eba62d506b7e5db0476f159cf698be3d5"}
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.736801 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmxxj"
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.873261 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lclp\" (UniqueName: \"kubernetes.io/projected/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-kube-api-access-2lclp\") pod \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") "
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.873372 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-catalog-content\") pod \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") "
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.873683 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-utilities\") pod \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\" (UID: \"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2\") "
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.875288 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-utilities" (OuterVolumeSpecName: "utilities") pod "084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" (UID: "084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.898206 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" (UID: "084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.976256 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.976289 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lclp\" (UniqueName: \"kubernetes.io/projected/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-kube-api-access-2lclp\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:30 crc kubenswrapper[4957]: I0218 15:52:30.976302 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.650072 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmxxj" event={"ID":"084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2","Type":"ContainerDied","Data":"9fd4dea043e2f1d4029481d66a3e51fa046971ed7f2c8976af27ff2d34c48ed9"} Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.650124 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmxxj" Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.650143 4957 scope.go:117] "RemoveContainer" containerID="b10e414777ecff722d7fc7ecb2fceba979c79eeb8841a4d34df6d7ec40e456dc" Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.656025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5","Type":"ContainerStarted","Data":"db9261b83cc14e443736b02acf3724a3fc9044280341896452ea05a4a5ba1700"} Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.659900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/must-gather-qx9qn" event={"ID":"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba","Type":"ContainerStarted","Data":"2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6"} Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.688118 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6llw9/must-gather-qx9qn" podStartSLOduration=4.460659091 podStartE2EDuration="14.688097884s" podCreationTimestamp="2026-02-18 15:52:17 +0000 UTC" firstStartedPulling="2026-02-18 15:52:19.756165112 +0000 UTC m=+4846.277029856" lastFinishedPulling="2026-02-18 15:52:29.983603905 +0000 UTC m=+4856.504468649" observedRunningTime="2026-02-18 15:52:31.678662505 +0000 UTC m=+4858.199527249" watchObservedRunningTime="2026-02-18 15:52:31.688097884 +0000 UTC m=+4858.208962628" Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.707517 4957 scope.go:117] "RemoveContainer" containerID="b48920229d10dca71dd3eb68aa1960214c7dd04ee231409984aaa87ef6719595" Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.708888 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmxxj"] Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.725171 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmxxj"] Feb 18 15:52:31 crc kubenswrapper[4957]: I0218 15:52:31.740850 4957 scope.go:117] "RemoveContainer" containerID="aa3822a62d1b5af22df4495292d643ee67263ab35c27b60d0917d08d723fb8d9" Feb 18 15:52:32 crc 
kubenswrapper[4957]: I0218 15:52:32.226766 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" path="/var/lib/kubelet/pods/084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2/volumes" Feb 18 15:52:33 crc kubenswrapper[4957]: I0218 15:52:33.687297 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5","Type":"ContainerStarted","Data":"3816ee4ec6d162d72e42fd3b23266086881ecc009ab6f113a6c23e39aef49b17"} Feb 18 15:52:33 crc kubenswrapper[4957]: I0218 15:52:33.687801 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 15:52:33 crc kubenswrapper[4957]: I0218 15:52:33.746706 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3834023 podStartE2EDuration="13.746687856s" podCreationTimestamp="2026-02-18 15:52:20 +0000 UTC" firstStartedPulling="2026-02-18 15:52:21.582689546 +0000 UTC m=+4848.103554290" lastFinishedPulling="2026-02-18 15:52:32.945975112 +0000 UTC m=+4859.466839846" observedRunningTime="2026-02-18 15:52:33.718227043 +0000 UTC m=+4860.239091787" watchObservedRunningTime="2026-02-18 15:52:33.746687856 +0000 UTC m=+4860.267552600" Feb 18 15:52:36 crc kubenswrapper[4957]: E0218 15:52:36.548068 4957 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:33658->38.102.83.213:46479: write tcp 38.102.83.213:33658->38.102.83.213:46479: write: broken pipe Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.550108 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6llw9/crc-debug-l44n4"] Feb 18 15:52:37 crc kubenswrapper[4957]: E0218 15:52:37.551388 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="registry-server" Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.551420 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="registry-server" Feb 18 15:52:37 crc kubenswrapper[4957]: E0218 15:52:37.551472 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="extract-content" Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.551481 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="extract-content" Feb 18 15:52:37 crc kubenswrapper[4957]: E0218 15:52:37.551518 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="extract-utilities" Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.551528 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="extract-utilities" Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.551802 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="084dfd3c-aa7d-49a6-bae1-bd2cdeeccba2" containerName="registry-server" Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.552883 4957 util.go:30] "No sandbox for pod can be found. 
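
[editor's note] The pod_startup_latency_tracker lines above report two durations, and the numbers indicate that podStartSLOduration is the end-to-end duration minus the image-pull window (lastFinishedPulling − firstStartedPulling): for must-gather-qx9qn, 14.688097884s − (29.983603905s − 19.756165112s) = 4.460659091s, exactly the logged value. A small Go sketch reproducing that arithmetic with the timestamps from the log:

    // sloduration_sketch.go — reproduces the podStartSLOduration arithmetic
    // implied by the tracker lines above: E2E duration minus image-pull time.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-02-18 15:52:17 +0000 UTC")
    	firstStartedPulling := parse("2026-02-18 15:52:19.756165112 +0000 UTC")
    	lastFinishedPulling := parse("2026-02-18 15:52:29.983603905 +0000 UTC")
    	watchObservedRunning := parse("2026-02-18 15:52:31.688097884 +0000 UTC")

    	e2e := watchObservedRunning.Sub(created)                  // 14.688097884s
    	slo := e2e - lastFinishedPulling.Sub(firstStartedPulling) // 4.460659091s
    	fmt.Println(e2e, slo)
    }

The ceilometer-0 and crc-debug-l44n4 tracker lines in this log check out the same way.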
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.552883 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.554977 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6llw9"/"default-dockercfg-pstlf"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.663513 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmww7\" (UniqueName: \"kubernetes.io/projected/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-kube-api-access-qmww7\") pod \"crc-debug-l44n4\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.663660 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-host\") pod \"crc-debug-l44n4\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.732241 4957 generic.go:334] "Generic (PLEG): container finished" podID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerID="7ad31d8e6a406cf0887d96673a35c5569584e551a89febcee4c6cdcfc7645dff" exitCode=0
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.732287 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerDied","Data":"7ad31d8e6a406cf0887d96673a35c5569584e551a89febcee4c6cdcfc7645dff"}
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.732313 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerStarted","Data":"d983eb4a0e815c325155d134ad922689c7c19c62ac6d15f31301a766e95b12d1"}
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.732332 4957 scope.go:117] "RemoveContainer" containerID="2fde7c5ef212618d402aa21565652c78d592fc74e88dbe573e09bdb5b2127eda"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.768814 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-host\") pod \"crc-debug-l44n4\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.769284 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmww7\" (UniqueName: \"kubernetes.io/projected/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-kube-api-access-qmww7\") pod \"crc-debug-l44n4\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.771524 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-host\") pod \"crc-debug-l44n4\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.813587 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmww7\" (UniqueName: \"kubernetes.io/projected/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-kube-api-access-qmww7\") pod \"crc-debug-l44n4\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: I0218 15:52:37.885388 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-l44n4"
Feb 18 15:52:37 crc kubenswrapper[4957]: W0218 15:52:37.929194 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e04620_a3d4_4086_8f1f_a7023ab4cfe3.slice/crio-355ff27148f4e4b5e36ed1a2de5afa1e00cb26873e00820cc914b8a8fe8a544e WatchSource:0}: Error finding container 355ff27148f4e4b5e36ed1a2de5afa1e00cb26873e00820cc914b8a8fe8a544e: Status 404 returned error can't find the container with id 355ff27148f4e4b5e36ed1a2de5afa1e00cb26873e00820cc914b8a8fe8a544e
Feb 18 15:52:38 crc kubenswrapper[4957]: I0218 15:52:38.182325 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:38 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:38 crc kubenswrapper[4957]: >
Feb 18 15:52:38 crc kubenswrapper[4957]: I0218 15:52:38.748929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-l44n4" event={"ID":"83e04620-a3d4-4086-8f1f-a7023ab4cfe3","Type":"ContainerStarted","Data":"355ff27148f4e4b5e36ed1a2de5afa1e00cb26873e00820cc914b8a8fe8a544e"}
Feb 18 15:52:40 crc kubenswrapper[4957]: I0218 15:52:40.345185 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 15:52:40 crc kubenswrapper[4957]: I0218 15:52:40.345627 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrpnn"
Feb 18 15:52:41 crc kubenswrapper[4957]: I0218 15:52:41.400174 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:41 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:41 crc kubenswrapper[4957]: >
Feb 18 15:52:48 crc kubenswrapper[4957]: I0218 15:52:48.217917 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:48 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:48 crc kubenswrapper[4957]: >
Feb 18 15:52:50 crc kubenswrapper[4957]: I0218 15:52:50.938281 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 18 15:52:51 crc kubenswrapper[4957]: I0218 15:52:51.420073 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:51 crc kubenswrapper[4957]: >
Feb 18 15:52:54 crc kubenswrapper[4957]: I0218 15:52:54.984223 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-l44n4" event={"ID":"83e04620-a3d4-4086-8f1f-a7023ab4cfe3","Type":"ContainerStarted","Data":"4916d51595abddad001c698605c4dc16dc207af0355baee6bb28524a254b8db3"}
Feb 18 15:52:55 crc kubenswrapper[4957]: I0218 15:52:55.017000 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6llw9/crc-debug-l44n4" podStartSLOduration=2.038105516 podStartE2EDuration="18.016969974s" podCreationTimestamp="2026-02-18 15:52:37 +0000 UTC" firstStartedPulling="2026-02-18 15:52:37.932878507 +0000 UTC m=+4864.453743251" lastFinishedPulling="2026-02-18 15:52:53.911742965 +0000 UTC m=+4880.432607709" observedRunningTime="2026-02-18 15:52:54.996936332 +0000 UTC m=+4881.517801076" watchObservedRunningTime="2026-02-18 15:52:55.016969974 +0000 UTC m=+4881.537834758"
Feb 18 15:52:58 crc kubenswrapper[4957]: I0218 15:52:58.269739 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lgmk" podUID="9a5c9544-331f-47df-898d-d19b2c9fe2b1" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:52:58 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:52:58 crc kubenswrapper[4957]: >
Feb 18 15:53:00 crc kubenswrapper[4957]: I0218 15:53:00.066028 4957 generic.go:334] "Generic (PLEG): container finished" podID="bdcbb72c-6e5e-4167-baf4-ca754b4122a0" containerID="4c75ce34f7912c1678d6e79c294eb2798fa93aa8c3f970e4c466fb3bab99e09b" exitCode=0
Feb 18 15:53:00 crc kubenswrapper[4957]: I0218 15:53:00.066784 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" event={"ID":"bdcbb72c-6e5e-4167-baf4-ca754b4122a0","Type":"ContainerDied","Data":"4c75ce34f7912c1678d6e79c294eb2798fa93aa8c3f970e4c466fb3bab99e09b"}
Feb 18 15:53:01 crc kubenswrapper[4957]: I0218 15:53:01.083441 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" event={"ID":"bdcbb72c-6e5e-4167-baf4-ca754b4122a0","Type":"ContainerStarted","Data":"a7d43a6b1fb98a1caebe375ad062f3c08791bc73985183dc74588eca2b99adc8"}
Feb 18 15:53:01 crc kubenswrapper[4957]: I0218 15:53:01.400305 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:53:01 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:53:01 crc kubenswrapper[4957]: >
Feb 18 15:53:05 crc kubenswrapper[4957]: I0218 15:53:05.421785 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk"
Feb 18 15:53:05 crc kubenswrapper[4957]: I0218 15:53:05.422335 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk"
Feb 18 15:53:07 crc kubenswrapper[4957]: I0218 15:53:07.226629 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8lgmk"
Feb 18 15:53:07 crc kubenswrapper[4957]: I0218 15:53:07.299303 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8lgmk"
Feb 18 15:53:07 crc kubenswrapper[4957]: I0218 15:53:07.414977 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lgmk"]
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrpnn"] Feb 18 15:53:07 crc kubenswrapper[4957]: I0218 15:53:07.473577 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrpnn" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" containerID="cri-o://d983eb4a0e815c325155d134ad922689c7c19c62ac6d15f31301a766e95b12d1" gracePeriod=2 Feb 18 15:53:10 crc kubenswrapper[4957]: I0218 15:53:10.187635 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vrpnn_9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00/registry-server/2.log" Feb 18 15:53:10 crc kubenswrapper[4957]: I0218 15:53:10.193090 4957 generic.go:334] "Generic (PLEG): container finished" podID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerID="d983eb4a0e815c325155d134ad922689c7c19c62ac6d15f31301a766e95b12d1" exitCode=137 Feb 18 15:53:10 crc kubenswrapper[4957]: I0218 15:53:10.193141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerDied","Data":"d983eb4a0e815c325155d134ad922689c7c19c62ac6d15f31301a766e95b12d1"} Feb 18 15:53:10 crc kubenswrapper[4957]: I0218 15:53:10.193178 4957 scope.go:117] "RemoveContainer" containerID="7ad31d8e6a406cf0887d96673a35c5569584e551a89febcee4c6cdcfc7645dff" Feb 18 15:53:11 crc kubenswrapper[4957]: I0218 15:53:11.214131 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vrpnn_9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00/registry-server/2.log" Feb 18 15:53:12 crc kubenswrapper[4957]: I0218 15:53:12.232263 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vrpnn_9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00/registry-server/2.log" Feb 18 15:53:12 crc kubenswrapper[4957]: I0218 15:53:12.233256 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpnn" event={"ID":"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00","Type":"ContainerDied","Data":"c720ea27e2c752eaf35d404e0a2abed9da9d1894fddbaccc74fdf4f2a9b66915"} Feb 18 15:53:12 crc kubenswrapper[4957]: I0218 15:53:12.233721 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c720ea27e2c752eaf35d404e0a2abed9da9d1894fddbaccc74fdf4f2a9b66915" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.417243 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vrpnn_9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00/registry-server/2.log" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.418438 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.572815 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-utilities\") pod \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.573054 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hds8\" (UniqueName: \"kubernetes.io/projected/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-kube-api-access-9hds8\") pod \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.573087 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-catalog-content\") pod \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\" (UID: \"9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00\") " Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.574464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-utilities" (OuterVolumeSpecName: "utilities") pod "9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" (UID: "9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.601353 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-kube-api-access-9hds8" (OuterVolumeSpecName: "kube-api-access-9hds8") pod "9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" (UID: "9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00"). InnerVolumeSpecName "kube-api-access-9hds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.675746 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hds8\" (UniqueName: \"kubernetes.io/projected/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-kube-api-access-9hds8\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.675787 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.690523 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" (UID: "9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:53:13 crc kubenswrapper[4957]: I0218 15:53:13.778106 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:14 crc kubenswrapper[4957]: I0218 15:53:14.269265 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpnn" Feb 18 15:53:14 crc kubenswrapper[4957]: I0218 15:53:14.351478 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrpnn"] Feb 18 15:53:14 crc kubenswrapper[4957]: I0218 15:53:14.372623 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vrpnn"] Feb 18 15:53:16 crc kubenswrapper[4957]: I0218 15:53:16.229079 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" path="/var/lib/kubelet/pods/9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00/volumes" Feb 18 15:53:20 crc kubenswrapper[4957]: I0218 15:53:20.099735 4957 scope.go:117] "RemoveContainer" containerID="98e175a08d48848566a5f995eccdfb671697af4cdad7fb6c6b5ade1708dc6017" Feb 18 15:53:20 crc kubenswrapper[4957]: I0218 15:53:20.423476 4957 scope.go:117] "RemoveContainer" containerID="74c401adc17ef8ccef40e2d9a358b735cb288099545d35004f38cebbc600ba8b" Feb 18 15:53:25 crc kubenswrapper[4957]: I0218 15:53:25.432119 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 15:53:25 crc kubenswrapper[4957]: I0218 15:53:25.437584 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7ffc4d6784-c7kvk" Feb 18 15:53:41 crc kubenswrapper[4957]: I0218 15:53:41.654600 4957 generic.go:334] "Generic (PLEG): container finished" podID="83e04620-a3d4-4086-8f1f-a7023ab4cfe3" containerID="4916d51595abddad001c698605c4dc16dc207af0355baee6bb28524a254b8db3" exitCode=0 Feb 18 15:53:41 crc kubenswrapper[4957]: I0218 15:53:41.654658 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-l44n4" event={"ID":"83e04620-a3d4-4086-8f1f-a7023ab4cfe3","Type":"ContainerDied","Data":"4916d51595abddad001c698605c4dc16dc207af0355baee6bb28524a254b8db3"} Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.784307 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-l44n4" Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.836472 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6llw9/crc-debug-l44n4"] Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.847092 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6llw9/crc-debug-l44n4"] Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.911763 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-host\") pod \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.911835 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-host" (OuterVolumeSpecName: "host") pod "83e04620-a3d4-4086-8f1f-a7023ab4cfe3" (UID: "83e04620-a3d4-4086-8f1f-a7023ab4cfe3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.911882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmww7\" (UniqueName: \"kubernetes.io/projected/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-kube-api-access-qmww7\") pod \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\" (UID: \"83e04620-a3d4-4086-8f1f-a7023ab4cfe3\") " Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.912730 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:42 crc kubenswrapper[4957]: I0218 15:53:42.917665 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-kube-api-access-qmww7" (OuterVolumeSpecName: "kube-api-access-qmww7") pod "83e04620-a3d4-4086-8f1f-a7023ab4cfe3" (UID: "83e04620-a3d4-4086-8f1f-a7023ab4cfe3"). InnerVolumeSpecName "kube-api-access-qmww7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:53:43 crc kubenswrapper[4957]: I0218 15:53:43.014927 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmww7\" (UniqueName: \"kubernetes.io/projected/83e04620-a3d4-4086-8f1f-a7023ab4cfe3-kube-api-access-qmww7\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:43 crc kubenswrapper[4957]: I0218 15:53:43.676601 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="355ff27148f4e4b5e36ed1a2de5afa1e00cb26873e00820cc914b8a8fe8a544e" Feb 18 15:53:43 crc kubenswrapper[4957]: I0218 15:53:43.677029 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-l44n4" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.238525 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e04620-a3d4-4086-8f1f-a7023ab4cfe3" path="/var/lib/kubelet/pods/83e04620-a3d4-4086-8f1f-a7023ab4cfe3/volumes" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.282869 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6llw9/crc-debug-xh5dg"] Feb 18 15:53:44 crc kubenswrapper[4957]: E0218 15:53:44.283361 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e04620-a3d4-4086-8f1f-a7023ab4cfe3" containerName="container-00" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283374 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e04620-a3d4-4086-8f1f-a7023ab4cfe3" containerName="container-00" Feb 18 15:53:44 crc kubenswrapper[4957]: E0218 15:53:44.283381 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="extract-content" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283387 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="extract-content" Feb 18 15:53:44 crc kubenswrapper[4957]: E0218 15:53:44.283409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283430 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: E0218 15:53:44.283442 4957 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283448 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: E0218 15:53:44.283473 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="extract-utilities" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283479 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="extract-utilities" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283722 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e04620-a3d4-4086-8f1f-a7023ab4cfe3" containerName="container-00" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283739 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283752 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.283771 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.284577 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.286396 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6llw9"/"default-dockercfg-pstlf" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.445652 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9cae46f3-26e4-4117-aff0-706ce5c323f7-host\") pod \"crc-debug-xh5dg\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.445702 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvlpn\" (UniqueName: \"kubernetes.io/projected/9cae46f3-26e4-4117-aff0-706ce5c323f7-kube-api-access-nvlpn\") pod \"crc-debug-xh5dg\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.548294 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9cae46f3-26e4-4117-aff0-706ce5c323f7-host\") pod \"crc-debug-xh5dg\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.548364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvlpn\" (UniqueName: \"kubernetes.io/projected/9cae46f3-26e4-4117-aff0-706ce5c323f7-kube-api-access-nvlpn\") pod \"crc-debug-xh5dg\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.548481 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/9cae46f3-26e4-4117-aff0-706ce5c323f7-host\") pod \"crc-debug-xh5dg\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.568621 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvlpn\" (UniqueName: \"kubernetes.io/projected/9cae46f3-26e4-4117-aff0-706ce5c323f7-kube-api-access-nvlpn\") pod \"crc-debug-xh5dg\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.601050 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:44 crc kubenswrapper[4957]: I0218 15:53:44.689693 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-xh5dg" event={"ID":"9cae46f3-26e4-4117-aff0-706ce5c323f7","Type":"ContainerStarted","Data":"6ea3a498cecb6525745bfba0343281fc9bdaf2584158fff90466fe0e02dbe7a5"} Feb 18 15:53:45 crc kubenswrapper[4957]: I0218 15:53:45.710055 4957 generic.go:334] "Generic (PLEG): container finished" podID="9cae46f3-26e4-4117-aff0-706ce5c323f7" containerID="d1527e771e632b4b6dad271517475d0c90401605a73b27cc1fb3983a48900e21" exitCode=0 Feb 18 15:53:45 crc kubenswrapper[4957]: I0218 15:53:45.710291 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-xh5dg" event={"ID":"9cae46f3-26e4-4117-aff0-706ce5c323f7","Type":"ContainerDied","Data":"d1527e771e632b4b6dad271517475d0c90401605a73b27cc1fb3983a48900e21"} Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.651542 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6llw9/crc-debug-xh5dg"] Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.666553 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6llw9/crc-debug-xh5dg"] Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.844809 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.929162 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvlpn\" (UniqueName: \"kubernetes.io/projected/9cae46f3-26e4-4117-aff0-706ce5c323f7-kube-api-access-nvlpn\") pod \"9cae46f3-26e4-4117-aff0-706ce5c323f7\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.929345 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9cae46f3-26e4-4117-aff0-706ce5c323f7-host\") pod \"9cae46f3-26e4-4117-aff0-706ce5c323f7\" (UID: \"9cae46f3-26e4-4117-aff0-706ce5c323f7\") " Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.929569 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cae46f3-26e4-4117-aff0-706ce5c323f7-host" (OuterVolumeSpecName: "host") pod "9cae46f3-26e4-4117-aff0-706ce5c323f7" (UID: "9cae46f3-26e4-4117-aff0-706ce5c323f7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.935214 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9cae46f3-26e4-4117-aff0-706ce5c323f7-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:46 crc kubenswrapper[4957]: I0218 15:53:46.944302 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cae46f3-26e4-4117-aff0-706ce5c323f7-kube-api-access-nvlpn" (OuterVolumeSpecName: "kube-api-access-nvlpn") pod "9cae46f3-26e4-4117-aff0-706ce5c323f7" (UID: "9cae46f3-26e4-4117-aff0-706ce5c323f7"). InnerVolumeSpecName "kube-api-access-nvlpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.040712 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvlpn\" (UniqueName: \"kubernetes.io/projected/9cae46f3-26e4-4117-aff0-706ce5c323f7-kube-api-access-nvlpn\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.731572 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea3a498cecb6525745bfba0343281fc9bdaf2584158fff90466fe0e02dbe7a5" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.731670 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-xh5dg" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.911243 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6llw9/crc-debug-x5kz8"] Feb 18 15:53:47 crc kubenswrapper[4957]: E0218 15:53:47.911701 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.911714 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f04a8b9-47dc-4fdf-b0fa-b39ee03d5f00" containerName="registry-server" Feb 18 15:53:47 crc kubenswrapper[4957]: E0218 15:53:47.911737 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cae46f3-26e4-4117-aff0-706ce5c323f7" containerName="container-00" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.911743 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cae46f3-26e4-4117-aff0-706ce5c323f7" containerName="container-00" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.911988 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cae46f3-26e4-4117-aff0-706ce5c323f7" containerName="container-00" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.913550 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.915384 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6llw9"/"default-dockercfg-pstlf" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.964288 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htc4l\" (UniqueName: \"kubernetes.io/projected/102a2a30-4adf-4ef7-8c60-bf10d5db664b-kube-api-access-htc4l\") pod \"crc-debug-x5kz8\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:47 crc kubenswrapper[4957]: I0218 15:53:47.964696 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/102a2a30-4adf-4ef7-8c60-bf10d5db664b-host\") pod \"crc-debug-x5kz8\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.066576 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htc4l\" (UniqueName: \"kubernetes.io/projected/102a2a30-4adf-4ef7-8c60-bf10d5db664b-kube-api-access-htc4l\") pod \"crc-debug-x5kz8\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.066664 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/102a2a30-4adf-4ef7-8c60-bf10d5db664b-host\") pod \"crc-debug-x5kz8\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.067713 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/102a2a30-4adf-4ef7-8c60-bf10d5db664b-host\") pod \"crc-debug-x5kz8\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.093589 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htc4l\" (UniqueName: \"kubernetes.io/projected/102a2a30-4adf-4ef7-8c60-bf10d5db664b-kube-api-access-htc4l\") pod \"crc-debug-x5kz8\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.228296 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cae46f3-26e4-4117-aff0-706ce5c323f7" path="/var/lib/kubelet/pods/9cae46f3-26e4-4117-aff0-706ce5c323f7/volumes" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.232446 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.742136 4957 generic.go:334] "Generic (PLEG): container finished" podID="102a2a30-4adf-4ef7-8c60-bf10d5db664b" containerID="764bd57b0f11646fa308a355a44f8e4c3b0e440c752e44e9eb128fd67eb71cbf" exitCode=0 Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.742358 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-x5kz8" event={"ID":"102a2a30-4adf-4ef7-8c60-bf10d5db664b","Type":"ContainerDied","Data":"764bd57b0f11646fa308a355a44f8e4c3b0e440c752e44e9eb128fd67eb71cbf"} Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.742464 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/crc-debug-x5kz8" event={"ID":"102a2a30-4adf-4ef7-8c60-bf10d5db664b","Type":"ContainerStarted","Data":"4783f6ab701b3788e858908ab24d24bfcbcf4eda02a97bae1e212a2a2ad3b9f0"} Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.871469 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6llw9/crc-debug-x5kz8"] Feb 18 15:53:48 crc kubenswrapper[4957]: I0218 15:53:48.887942 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6llw9/crc-debug-x5kz8"] Feb 18 15:53:49 crc kubenswrapper[4957]: I0218 15:53:49.894757 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.015638 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/102a2a30-4adf-4ef7-8c60-bf10d5db664b-host\") pod \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.015764 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102a2a30-4adf-4ef7-8c60-bf10d5db664b-host" (OuterVolumeSpecName: "host") pod "102a2a30-4adf-4ef7-8c60-bf10d5db664b" (UID: "102a2a30-4adf-4ef7-8c60-bf10d5db664b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.016104 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htc4l\" (UniqueName: \"kubernetes.io/projected/102a2a30-4adf-4ef7-8c60-bf10d5db664b-kube-api-access-htc4l\") pod \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\" (UID: \"102a2a30-4adf-4ef7-8c60-bf10d5db664b\") " Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.016991 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/102a2a30-4adf-4ef7-8c60-bf10d5db664b-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.021758 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102a2a30-4adf-4ef7-8c60-bf10d5db664b-kube-api-access-htc4l" (OuterVolumeSpecName: "kube-api-access-htc4l") pod "102a2a30-4adf-4ef7-8c60-bf10d5db664b" (UID: "102a2a30-4adf-4ef7-8c60-bf10d5db664b"). InnerVolumeSpecName "kube-api-access-htc4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.118926 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htc4l\" (UniqueName: \"kubernetes.io/projected/102a2a30-4adf-4ef7-8c60-bf10d5db664b-kube-api-access-htc4l\") on node \"crc\" DevicePath \"\"" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.229168 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102a2a30-4adf-4ef7-8c60-bf10d5db664b" path="/var/lib/kubelet/pods/102a2a30-4adf-4ef7-8c60-bf10d5db664b/volumes" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.776674 4957 scope.go:117] "RemoveContainer" containerID="764bd57b0f11646fa308a355a44f8e4c3b0e440c752e44e9eb128fd67eb71cbf" Feb 18 15:53:50 crc kubenswrapper[4957]: I0218 15:53:50.777172 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/crc-debug-x5kz8" Feb 18 15:54:07 crc kubenswrapper[4957]: I0218 15:54:07.279328 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:54:07 crc kubenswrapper[4957]: I0218 15:54:07.279882 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.322547 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bdf7772f-356b-41a5-ad1d-80b40c742d36/aodh-api/0.log" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.560791 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bdf7772f-356b-41a5-ad1d-80b40c742d36/aodh-evaluator/0.log" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.573361 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bdf7772f-356b-41a5-ad1d-80b40c742d36/aodh-listener/0.log" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.579599 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bdf7772f-356b-41a5-ad1d-80b40c742d36/aodh-notifier/0.log" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.776862 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c7999fbc4-ttfwg_b15f7971-ec1f-4b4e-ae33-45863ceb6b09/barbican-api/0.log" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.815661 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c7999fbc4-ttfwg_b15f7971-ec1f-4b4e-ae33-45863ceb6b09/barbican-api-log/0.log" Feb 18 15:54:29 crc kubenswrapper[4957]: I0218 15:54:29.869962 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fffb49c9b-fm77z_a84b8763-62fd-4a64-ab6e-276ca0488599/barbican-keystone-listener/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.043915 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fffb49c9b-fm77z_a84b8763-62fd-4a64-ab6e-276ca0488599/barbican-keystone-listener-log/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.086983 4957 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7667cffcf-zch65_4370835e-b763-479d-b742-fe754a24bd1b/barbican-worker/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.133872 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7667cffcf-zch65_4370835e-b763-479d-b742-fe754a24bd1b/barbican-worker-log/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.318193 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-45npd_ce22a60c-ac91-4fcd-a298-330ace1c4d68/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.380664 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5/ceilometer-central-agent/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.484971 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5/ceilometer-notification-agent/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.525501 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5/proxy-httpd/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.550698 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_807ce5d3-ed11-4105-9e7a-ed9fe8af0fa5/sg-core/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.746008 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_18e4612d-bb78-44c5-b59e-4dbe1342c3d3/cinder-api/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.802361 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_18e4612d-bb78-44c5-b59e-4dbe1342c3d3/cinder-api-log/0.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.891017 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e7fbc309-e5ee-4222-8409-6d68468ae015/cinder-scheduler/1.log" Feb 18 15:54:30 crc kubenswrapper[4957]: I0218 15:54:30.991929 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e7fbc309-e5ee-4222-8409-6d68468ae015/cinder-scheduler/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.063195 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e7fbc309-e5ee-4222-8409-6d68468ae015/probe/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.123141 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5h6ql_a4eba666-1695-49b1-8825-a5ba56bee93e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.274547 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rvs68_4156e79b-8cb7-4a2f-95a8-d782eed526a3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.349189 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-jt2ll_b830b39f-23b6-4b85-ba54-e8f4b81d5d5f/init/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.502049 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-jt2ll_b830b39f-23b6-4b85-ba54-e8f4b81d5d5f/init/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.607555 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bpv8q_1a0ed9bb-80bf-415e-b67d-8b79fb24cff8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.614215 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-jt2ll_b830b39f-23b6-4b85-ba54-e8f4b81d5d5f/dnsmasq-dns/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.830668 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1308e026-1cb5-4c09-8623-beac2d513406/glance-log/0.log" Feb 18 15:54:31 crc kubenswrapper[4957]: I0218 15:54:31.858313 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1308e026-1cb5-4c09-8623-beac2d513406/glance-httpd/0.log" Feb 18 15:54:32 crc kubenswrapper[4957]: I0218 15:54:32.014044 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_bb989a4d-f9cc-4305-a0e6-8f162b666e9d/glance-log/0.log" Feb 18 15:54:32 crc kubenswrapper[4957]: I0218 15:54:32.023610 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_bb989a4d-f9cc-4305-a0e6-8f162b666e9d/glance-httpd/0.log" Feb 18 15:54:32 crc kubenswrapper[4957]: I0218 15:54:32.704435 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-65d4996964-zpvph_2251ef18-33b0-4454-a9ff-2a00fd4974d7/heat-api/0.log" Feb 18 15:54:32 crc kubenswrapper[4957]: I0218 15:54:32.738779 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-596b8bcf84-qf6sp_44f06eec-0e32-4246-a893-652c9b180b2c/heat-engine/0.log" Feb 18 15:54:32 crc kubenswrapper[4957]: I0218 15:54:32.783618 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-69cc788d47-5c2pf_bd286cd4-02f3-4357-8c0e-bf30451df530/heat-cfnapi/0.log" Feb 18 15:54:32 crc kubenswrapper[4957]: I0218 15:54:32.899634 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-wf22p_47b95013-4d1a-4b81-b1c7-1fed8ecff2b1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:33 crc kubenswrapper[4957]: I0218 15:54:33.035596 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-66bnk_98d2a9b7-af08-49d1-b1d0-6e9c09a09bf1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:33 crc kubenswrapper[4957]: I0218 15:54:33.258852 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29523781-pkckt_c5829257-e7f8-4a49-8e8d-1780b76c346a/keystone-cron/0.log" Feb 18 15:54:33 crc kubenswrapper[4957]: I0218 15:54:33.379376 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_03293231-ba61-4099-89c9-b86cd6d9f489/kube-state-metrics/0.log" Feb 18 15:54:33 crc kubenswrapper[4957]: I0218 15:54:33.560441 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-td67t_ee4446c2-295a-4f11-b689-78721a39f23f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:33 crc kubenswrapper[4957]: I0218 15:54:33.632138 4957 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-tjx2d_3d0d2840-7aaf-4aca-b1c8-f43e8ccffdb8/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:33 crc kubenswrapper[4957]: I0218 15:54:33.858161 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_09169c38-c2c1-43b6-b01e-45320845dc8e/mysqld-exporter/0.log" Feb 18 15:54:34 crc kubenswrapper[4957]: I0218 15:54:34.316704 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7fbd58c64f-dmc49_c7e3267f-8a47-48ec-94ee-40aed5e39cff/neutron-httpd/0.log" Feb 18 15:54:34 crc kubenswrapper[4957]: I0218 15:54:34.357677 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-59cd79686b-x6zk5_47942968-a16c-4e8d-8aa8-2a54303782a5/keystone-api/0.log" Feb 18 15:54:34 crc kubenswrapper[4957]: I0218 15:54:34.362014 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7fbd58c64f-dmc49_c7e3267f-8a47-48ec-94ee-40aed5e39cff/neutron-api/0.log" Feb 18 15:54:34 crc kubenswrapper[4957]: I0218 15:54:34.544027 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-w9htc_afe6ed34-f3ab-456a-8628-d7128dcc602b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:35 crc kubenswrapper[4957]: I0218 15:54:35.016080 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dda89cee-ab64-461a-a48a-b5ec914cfb05/nova-cell0-conductor-conductor/0.log" Feb 18 15:54:35 crc kubenswrapper[4957]: I0218 15:54:35.022066 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e38b57a1-5b5d-4046-88b1-248b5eb0fe97/nova-api-log/0.log" Feb 18 15:54:35 crc kubenswrapper[4957]: I0218 15:54:35.383996 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4657d4de-c971-422a-abb6-9f3f16421c2a/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 15:54:35 crc kubenswrapper[4957]: I0218 15:54:35.387010 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d3787001-5ff0-47b4-917d-b0e9cbabd9a0/nova-cell1-conductor-conductor/0.log" Feb 18 15:54:35 crc kubenswrapper[4957]: I0218 15:54:35.426058 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e38b57a1-5b5d-4046-88b1-248b5eb0fe97/nova-api-api/0.log" Feb 18 15:54:36 crc kubenswrapper[4957]: I0218 15:54:36.310028 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rdvhw_13f17c48-8243-416b-939e-7ba8a50f08d4/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:36 crc kubenswrapper[4957]: I0218 15:54:36.394821 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5c129411-cf16-45ad-be6b-e31866a236e7/nova-metadata-log/0.log" Feb 18 15:54:36 crc kubenswrapper[4957]: I0218 15:54:36.754118 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c/mysql-bootstrap/0.log" Feb 18 15:54:36 crc kubenswrapper[4957]: I0218 15:54:36.843411 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1a3cba85-6288-4e31-aeeb-c65994d4592b/nova-scheduler-scheduler/0.log" Feb 18 15:54:36 crc kubenswrapper[4957]: I0218 15:54:36.959896 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c/mysql-bootstrap/0.log" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.001695 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c/galera/1.log" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.081540 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a1f75ef8-c8ef-4d67-8c40-11ff5f8f5f4c/galera/0.log" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.242033 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070/mysql-bootstrap/0.log" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.278523 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.278575 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.463966 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070/mysql-bootstrap/0.log" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.517465 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070/galera/0.log" Feb 18 15:54:37 crc kubenswrapper[4957]: I0218 15:54:37.545362 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070/galera/1.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.027949 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_aa2f421b-f6d0-4db4-9162-f863e45ca417/openstackclient/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.175797 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5c129411-cf16-45ad-be6b-e31866a236e7/nova-metadata-metadata/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.269900 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ztccc_93827c35-2801-441b-9a24-813c5d4a29ed/openstack-network-exporter/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.281075 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5s4rw_e2b4f5fe-0b27-47d8-8158-b51ad4229e86/ovn-controller/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.529922 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bgjwb_2c0f60cd-99b5-453c-9353-5c6298f95d2b/ovsdb-server-init/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.758130 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bgjwb_2c0f60cd-99b5-453c-9353-5c6298f95d2b/ovsdb-server-init/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.797274 4957 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bgjwb_2c0f60cd-99b5-453c-9353-5c6298f95d2b/ovs-vswitchd/0.log" Feb 18 15:54:38 crc kubenswrapper[4957]: I0218 15:54:38.848308 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bgjwb_2c0f60cd-99b5-453c-9353-5c6298f95d2b/ovsdb-server/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.002680 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_164c9825-00c8-4fcb-a706-b2afb16b9229/openstack-network-exporter/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.056791 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_164c9825-00c8-4fcb-a706-b2afb16b9229/ovn-northd/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.083840 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-b4qdb_e041f9c9-d870-4b03-a9c1-98316547db7b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.221569 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7be80ac2-8e92-4cb0-8184-c35add0ccc9b/openstack-network-exporter/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.224960 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7be80ac2-8e92-4cb0-8184-c35add0ccc9b/ovsdbserver-nb/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.442776 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_79f094ab-0b7e-4749-8c78-237f70a4bcef/openstack-network-exporter/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.523258 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_79f094ab-0b7e-4749-8c78-237f70a4bcef/ovsdbserver-sb/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.689631 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76ccd9d8b4-5flnw_226e0541-e9a7-4516-9a20-94ace7e03a41/placement-api/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.807698 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76ccd9d8b4-5flnw_226e0541-e9a7-4516-9a20-94ace7e03a41/placement-log/0.log" Feb 18 15:54:39 crc kubenswrapper[4957]: I0218 15:54:39.840788 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a600b253-74c8-473b-ba57-e03ac741c902/init-config-reloader/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.029316 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a600b253-74c8-473b-ba57-e03ac741c902/init-config-reloader/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.063946 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a600b253-74c8-473b-ba57-e03ac741c902/thanos-sidecar/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.068794 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a600b253-74c8-473b-ba57-e03ac741c902/prometheus/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.083368 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a600b253-74c8-473b-ba57-e03ac741c902/config-reloader/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 
15:54:40.282345 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6192b5e-59c5-4986-bac1-41acf8c0d46e/setup-container/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.470062 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6192b5e-59c5-4986-bac1-41acf8c0d46e/setup-container/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.518301 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6192b5e-59c5-4986-bac1-41acf8c0d46e/rabbitmq/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.569848 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7c71c78f-243c-40f3-aa9b-1cae17afc260/setup-container/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.777598 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7c71c78f-243c-40f3-aa9b-1cae17afc260/setup-container/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.815439 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7c71c78f-243c-40f3-aa9b-1cae17afc260/rabbitmq/0.log" Feb 18 15:54:40 crc kubenswrapper[4957]: I0218 15:54:40.827173 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_e6f1982d-1c44-43d2-8a39-6e247a6f09c8/setup-container/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.316719 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_e6f1982d-1c44-43d2-8a39-6e247a6f09c8/setup-container/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.371212 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_99fe3777-adec-48ee-b2a8-df742111168d/setup-container/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.388153 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_e6f1982d-1c44-43d2-8a39-6e247a6f09c8/rabbitmq/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.590775 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_99fe3777-adec-48ee-b2a8-df742111168d/setup-container/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.660691 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-94fqn_37b31c92-756d-4b57-874f-c5278c279d8b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.700842 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_99fe3777-adec-48ee-b2a8-df742111168d/rabbitmq/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.866763 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-szxlv_61f57f78-cac7-4ae9-b5bd-eef2885ac7ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:41 crc kubenswrapper[4957]: I0218 15:54:41.930900 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-dsk48_2cef43ea-a55c-4f05-8598-54b9bfc950b3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.176127 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5c4px_7eb43167-b3f1-4daa-a843-70abb56b314f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.191511 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-44n6z_3cfe39f1-7ef2-4668-aef1-22d3b50fb8e9/ssh-known-hosts-edpm-deployment/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.471393 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c68fdd987-chglv_47005221-336b-424d-8c90-fc0c320cd135/proxy-server/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.593314 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hzpv6_73ec2a80-4748-44f3-a979-1c854a4f3b49/swift-ring-rebalance/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.637842 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c68fdd987-chglv_47005221-336b-424d-8c90-fc0c320cd135/proxy-httpd/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.762830 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/account-auditor/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.888176 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/account-replicator/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.893307 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/account-reaper/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.929083 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/account-server/0.log" Feb 18 15:54:42 crc kubenswrapper[4957]: I0218 15:54:42.986869 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/container-auditor/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.135346 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/container-updater/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.166232 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/container-replicator/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.180126 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/container-server/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.224553 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/object-auditor/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.345339 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/object-expirer/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.430460 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/object-replicator/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.437183 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/object-server/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.467774 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/object-updater/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.607525 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/rsync/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.665565 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5790a7ec-79bb-49af-842f-e2b879f33184/swift-recon-cron/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.766852 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-w6sdx_2735d3be-2856-4c38-8944-8e8698f1fc14/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:43 crc kubenswrapper[4957]: I0218 15:54:43.969553 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-xrx4l_0705aa1e-ac4d-4316-a4b1-9ad967170574/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:44 crc kubenswrapper[4957]: I0218 15:54:44.175782 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d182365d-2786-484b-855e-dbb5452eb045/test-operator-logs-container/0.log" Feb 18 15:54:44 crc kubenswrapper[4957]: I0218 15:54:44.408827 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jn2jf_b5039a76-1c37-420a-9427-87f7d9b35576/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:54:44 crc kubenswrapper[4957]: I0218 15:54:44.511033 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b596b9fb-f116-4712-81fa-9382d13c295b/tempest-tests-tempest-tests-runner/0.log" Feb 18 15:54:51 crc kubenswrapper[4957]: I0218 15:54:51.487810 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ac146603-56e6-49dc-afe3-d46b005945a3/memcached/0.log" Feb 18 15:55:07 crc kubenswrapper[4957]: I0218 15:55:07.278679 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:55:07 crc kubenswrapper[4957]: I0218 15:55:07.279344 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:55:07 crc kubenswrapper[4957]: I0218 15:55:07.279394 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" Feb 18 15:55:07 crc kubenswrapper[4957]: I0218 15:55:07.280397 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"} 
pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:55:07 crc kubenswrapper[4957]: I0218 15:55:07.280486 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" gracePeriod=600 Feb 18 15:55:07 crc kubenswrapper[4957]: E0218 15:55:07.399994 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:55:08 crc kubenswrapper[4957]: I0218 15:55:08.132003 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" exitCode=0 Feb 18 15:55:08 crc kubenswrapper[4957]: I0218 15:55:08.132108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"} Feb 18 15:55:08 crc kubenswrapper[4957]: I0218 15:55:08.132437 4957 scope.go:117] "RemoveContainer" containerID="d09aa8248fe62b5495e82460606315a805e4d670c5f748bbdf39b1d2a62e0d2a" Feb 18 15:55:08 crc kubenswrapper[4957]: I0218 15:55:08.133356 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:55:08 crc kubenswrapper[4957]: E0218 15:55:08.133834 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:55:17 crc kubenswrapper[4957]: I0218 15:55:17.777750 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/util/0.log" Feb 18 15:55:17 crc kubenswrapper[4957]: I0218 15:55:17.988130 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/util/0.log" Feb 18 15:55:18 crc kubenswrapper[4957]: I0218 15:55:18.007046 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/pull/0.log" Feb 18 15:55:18 crc kubenswrapper[4957]: I0218 15:55:18.053973 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/pull/0.log" Feb 18 15:55:18 crc 
kubenswrapper[4957]: I0218 15:55:18.140437 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/util/0.log"
Feb 18 15:55:18 crc kubenswrapper[4957]: I0218 15:55:18.197833 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/pull/0.log"
Feb 18 15:55:18 crc kubenswrapper[4957]: I0218 15:55:18.292736 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_916dfaad630ecae58644db741cc680a2786dd5efa7bb707c738ce0be548rf8h_8fd97eb8-3fca-445e-9811-3921ab8ec6e8/extract/0.log"
Feb 18 15:55:18 crc kubenswrapper[4957]: I0218 15:55:18.948313 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-slwlj_e6651ea1-6311-4597-81cc-a8637f8cc88a/manager/0.log"
Feb 18 15:55:19 crc kubenswrapper[4957]: I0218 15:55:19.446745 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-b85vd_cc38dff8-4b46-4281-96a3-ff88c8200f59/manager/1.log"
Feb 18 15:55:19 crc kubenswrapper[4957]: I0218 15:55:19.868273 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-b85vd_cc38dff8-4b46-4281-96a3-ff88c8200f59/manager/0.log"
Feb 18 15:55:20 crc kubenswrapper[4957]: I0218 15:55:20.008285 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-xt2c9_07a618be-7572-49b8-aeb3-12ce37fbe7b3/manager/1.log"
Feb 18 15:55:20 crc kubenswrapper[4957]: I0218 15:55:20.238471 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-hhg5g_18f96572-e72c-48ae-b22b-4c6fb7a4d7b9/manager/1.log"
Feb 18 15:55:20 crc kubenswrapper[4957]: I0218 15:55:20.471131 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-hhg5g_18f96572-e72c-48ae-b22b-4c6fb7a4d7b9/manager/0.log"
Feb 18 15:55:20 crc kubenswrapper[4957]: I0218 15:55:20.724770 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-xg6pp_94bb800a-9927-4d0f-b9d2-53e4fb398fda/manager/0.log"
Feb 18 15:55:20 crc kubenswrapper[4957]: I0218 15:55:20.971620 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-pgchj_6c6f7318-74c7-4971-9888-45a6c025bdde/manager/1.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.202939 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fx4tl_91fd8838-0687-420b-b3dd-4130e221a66d/manager/1.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.316917 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fx4tl_91fd8838-0687-420b-b3dd-4130e221a66d/manager/0.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.540597 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-xt2c9_07a618be-7572-49b8-aeb3-12ce37fbe7b3/manager/0.log"
Feb 18 15:55:21 crc
kubenswrapper[4957]: I0218 15:55:21.682025 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-vt4tc_78c8fb66-d71a-44b7-b858-51f7ca26a407/manager/0.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.691334 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-pgchj_6c6f7318-74c7-4971-9888-45a6c025bdde/manager/0.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.762688 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-czjx4_147c50a5-37fc-4b06-803f-8ad1d1fd4625/manager/1.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.910522 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-czjx4_147c50a5-37fc-4b06-803f-8ad1d1fd4625/manager/0.log"
Feb 18 15:55:21 crc kubenswrapper[4957]: I0218 15:55:21.947199 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-zc4tx_ff480d9a-ead3-47a1-a765-59507dfe0853/manager/1.log"
Feb 18 15:55:22 crc kubenswrapper[4957]: I0218 15:55:22.314272 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-zc4tx_ff480d9a-ead3-47a1-a765-59507dfe0853/manager/0.log"
Feb 18 15:55:22 crc kubenswrapper[4957]: I0218 15:55:22.442348 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-vn8z8_8aaaba83-1c93-481a-9627-a46dbd3eef31/manager/0.log"
Feb 18 15:55:22 crc kubenswrapper[4957]: I0218 15:55:22.678700 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-rd7mm_eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe/manager/1.log"
Feb 18 15:55:22 crc kubenswrapper[4957]: I0218 15:55:22.764006 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-rd7mm_eb1f93ca-0a6f-4582-9d3a-329ddd1dd4fe/manager/0.log"
Feb 18 15:55:22 crc kubenswrapper[4957]: I0218 15:55:22.988912 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr_8bd25216-306e-42c0-93da-a51803507c1f/manager/1.log"
Feb 18 15:55:23 crc kubenswrapper[4957]: I0218 15:55:23.212873 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:55:23 crc kubenswrapper[4957]: E0218 15:55:23.213372 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:55:23 crc kubenswrapper[4957]: I0218 15:55:23.231110 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cwqlsr_8bd25216-306e-42c0-93da-a51803507c1f/manager/0.log"
Feb 18 15:55:23 crc kubenswrapper[4957]: I0218 15:55:23.461826 4957 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5666c999f9-b87pp_09673cd4-22c2-43fa-87ae-17b7a8a03308/operator/1.log" Feb 18 15:55:23 crc kubenswrapper[4957]: I0218 15:55:23.681260 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5666c999f9-b87pp_09673cd4-22c2-43fa-87ae-17b7a8a03308/operator/0.log" Feb 18 15:55:24 crc kubenswrapper[4957]: I0218 15:55:24.021411 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-549dd7c895-84tm5_33e1b915-d740-4ec7-b74e-b8b8b6356d4d/manager/1.log" Feb 18 15:55:24 crc kubenswrapper[4957]: I0218 15:55:24.398170 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-kx5gv_f70d6609-fcf8-47f9-89dc-986f8f2f902b/manager/1.log" Feb 18 15:55:24 crc kubenswrapper[4957]: I0218 15:55:24.782952 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hbqf7_4c4be899-e6fc-4664-89e1-b2eb45187e3a/registry-server/1.log" Feb 18 15:55:25 crc kubenswrapper[4957]: I0218 15:55:25.026656 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hbqf7_4c4be899-e6fc-4664-89e1-b2eb45187e3a/registry-server/0.log" Feb 18 15:55:25 crc kubenswrapper[4957]: I0218 15:55:25.376722 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-cxzhc_ef6b6faf-f852-4948-8d1b-d53eace855a4/manager/0.log" Feb 18 15:55:25 crc kubenswrapper[4957]: I0218 15:55:25.570860 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-9mf8z_644451ba-ce73-4312-b6cd-af99eb6c9fbc/manager/1.log" Feb 18 15:55:25 crc kubenswrapper[4957]: I0218 15:55:25.643344 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-9mf8z_644451ba-ce73-4312-b6cd-af99eb6c9fbc/manager/0.log" Feb 18 15:55:25 crc kubenswrapper[4957]: I0218 15:55:25.935332 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-d7pcl_33e776b3-c81e-4655-82a8-88c63ff8adf7/operator/1.log" Feb 18 15:55:26 crc kubenswrapper[4957]: I0218 15:55:26.072763 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-d7pcl_33e776b3-c81e-4655-82a8-88c63ff8adf7/operator/0.log" Feb 18 15:55:26 crc kubenswrapper[4957]: I0218 15:55:26.301285 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-kx5gv_f70d6609-fcf8-47f9-89dc-986f8f2f902b/manager/0.log" Feb 18 15:55:26 crc kubenswrapper[4957]: I0218 15:55:26.454037 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-549dd7c895-84tm5_33e1b915-d740-4ec7-b74e-b8b8b6356d4d/manager/0.log" Feb 18 15:55:26 crc kubenswrapper[4957]: I0218 15:55:26.830288 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-5k7g6_8adf52f0-b132-4541-8962-7fae9bce89c6/manager/1.log" Feb 18 15:55:26 crc kubenswrapper[4957]: I0218 15:55:26.897048 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-5k7g6_8adf52f0-b132-4541-8962-7fae9bce89c6/manager/0.log" Feb 18 15:55:27 crc kubenswrapper[4957]: I0218 15:55:27.053570 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-54bf66477-rc4j4_f507ee0e-6836-4f30-b79e-63979d76a449/manager/1.log" Feb 18 15:55:27 crc kubenswrapper[4957]: I0218 15:55:27.156235 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-2fdgz_b724d9a9-8ae5-4295-9b4e-5ec65793b59f/manager/1.log" Feb 18 15:55:27 crc kubenswrapper[4957]: I0218 15:55:27.357409 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-2fdgz_b724d9a9-8ae5-4295-9b4e-5ec65793b59f/manager/0.log" Feb 18 15:55:27 crc kubenswrapper[4957]: I0218 15:55:27.548995 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-54bf66477-rc4j4_f507ee0e-6836-4f30-b79e-63979d76a449/manager/0.log" Feb 18 15:55:27 crc kubenswrapper[4957]: I0218 15:55:27.643173 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-kshbq_c17ba5f2-7fb4-4ed7-8623-f987653f8f9b/manager/1.log" Feb 18 15:55:27 crc kubenswrapper[4957]: I0218 15:55:27.764229 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-kshbq_c17ba5f2-7fb4-4ed7-8623-f987653f8f9b/manager/0.log" Feb 18 15:55:32 crc kubenswrapper[4957]: I0218 15:55:32.610655 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jv5dd_86c162c7-c82d-4627-bf84-11d5fb80199f/manager/0.log" Feb 18 15:55:34 crc kubenswrapper[4957]: I0218 15:55:34.224390 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:55:34 crc kubenswrapper[4957]: E0218 15:55:34.225392 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:55:46 crc kubenswrapper[4957]: I0218 15:55:46.638624 4957 trace.go:236] Trace[155399184]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-97lxp" (18-Feb-2026 15:55:45.324) (total time: 1309ms): Feb 18 15:55:46 crc kubenswrapper[4957]: Trace[155399184]: [1.309434886s] [1.309434886s] END Feb 18 15:55:46 crc kubenswrapper[4957]: E0218 15:55:46.642285 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 18 15:55:49 crc kubenswrapper[4957]: I0218 15:55:49.213548 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:55:49 crc kubenswrapper[4957]: E0218 15:55:49.214330 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:55:53 crc kubenswrapper[4957]: I0218 15:55:53.028026 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mn6hj_914e8f14-972c-4ca7-bcc6-4fc802cdfdc6/control-plane-machine-set-operator/0.log"
Feb 18 15:55:53 crc kubenswrapper[4957]: I0218 15:55:53.218436 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-d4mbw_7ebbd0f0-af37-460a-88f5-ff0e855f652c/machine-api-operator/0.log"
Feb 18 15:55:53 crc kubenswrapper[4957]: I0218 15:55:53.245069 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-d4mbw_7ebbd0f0-af37-460a-88f5-ff0e855f652c/kube-rbac-proxy/0.log"
Feb 18 15:56:00 crc kubenswrapper[4957]: I0218 15:56:00.796565 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8e98adae-ca1c-4f1d-a7e4-6ea6ccea7070" containerName="galera" probeResult="failure" output="command timed out"
Feb 18 15:56:01 crc kubenswrapper[4957]: I0218 15:56:01.213649 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:56:01 crc kubenswrapper[4957]: E0218 15:56:01.214380 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:56:10 crc kubenswrapper[4957]: I0218 15:56:10.686405 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mqqs4_12b5712d-e8c7-43f6-b44f-4641e48d8046/cert-manager-controller/0.log"
Feb 18 15:56:10 crc kubenswrapper[4957]: I0218 15:56:10.849704 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-t6tqs_4aa8b825-ca42-4619-b7bb-380195ddbf84/cert-manager-cainjector/0.log"
Feb 18 15:56:10 crc kubenswrapper[4957]: I0218 15:56:10.914879 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q5pw9_77a4b221-67be-4248-beaa-1f4602e3b35b/cert-manager-webhook/1.log"
Feb 18 15:56:10 crc kubenswrapper[4957]: I0218 15:56:10.969004 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q5pw9_77a4b221-67be-4248-beaa-1f4602e3b35b/cert-manager-webhook/0.log"
Feb 18 15:56:15 crc kubenswrapper[4957]: I0218 15:56:15.214541 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:56:15 crc kubenswrapper[4957]: E0218 15:56:15.215347 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:56:26 crc kubenswrapper[4957]: I0218 15:56:26.885167 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dd487_938fd18b-26fb-40b4-ab31-e0e8dff90d82/nmstate-console-plugin/0.log"
Feb 18 15:56:27 crc kubenswrapper[4957]: I0218 15:56:27.098871 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-6hxmz_20ae69bb-d285-4fbd-8c24-8385e5f6152d/kube-rbac-proxy/0.log"
Feb 18 15:56:27 crc kubenswrapper[4957]: I0218 15:56:27.104633 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vwl5q_3b69da89-d2a6-4e8f-ac79-99e1bb296fcc/nmstate-handler/0.log"
Feb 18 15:56:27 crc kubenswrapper[4957]: I0218 15:56:27.233884 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-6hxmz_20ae69bb-d285-4fbd-8c24-8385e5f6152d/nmstate-metrics/0.log"
Feb 18 15:56:27 crc kubenswrapper[4957]: I0218 15:56:27.305789 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-kk85q_25b254d3-9fe8-4024-b499-a813dbd98972/nmstate-operator/0.log"
Feb 18 15:56:27 crc kubenswrapper[4957]: I0218 15:56:27.419337 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-vzkmx_7bf5dd6b-3bc3-4ead-8fab-478e02b32496/nmstate-webhook/0.log"
Feb 18 15:56:29 crc kubenswrapper[4957]: I0218 15:56:29.212807 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:56:29 crc kubenswrapper[4957]: E0218 15:56:29.213880 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:56:41 crc kubenswrapper[4957]: I0218 15:56:41.213398 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:56:41 crc kubenswrapper[4957]: E0218 15:56:41.214233 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:56:44 crc kubenswrapper[4957]: I0218 15:56:44.797884 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-669bf4b44b-ndlc7_da87ca13-b23a-4345-b79d-46c8e9bec9b3/kube-rbac-proxy/0.log"
Feb 18 15:56:44 crc kubenswrapper[4957]: I0218 15:56:44.812004 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-669bf4b44b-ndlc7_da87ca13-b23a-4345-b79d-46c8e9bec9b3/manager/1.log"
Feb 18 15:56:44 crc
kubenswrapper[4957]: I0218 15:56:44.988598 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-669bf4b44b-ndlc7_da87ca13-b23a-4345-b79d-46c8e9bec9b3/manager/0.log"
Feb 18 15:56:55 crc kubenswrapper[4957]: I0218 15:56:55.213262 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:56:55 crc kubenswrapper[4957]: E0218 15:56:55.214196 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:57:00 crc kubenswrapper[4957]: I0218 15:57:00.778022 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-h8fsb_36ce4ea6-b461-4e76-9db4-10bb9d864512/prometheus-operator/0.log"
Feb 18 15:57:00 crc kubenswrapper[4957]: I0218 15:57:00.978224 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_d89f03d4-3521-43ea-85f4-631c25a3379b/prometheus-operator-admission-webhook/0.log"
Feb 18 15:57:01 crc kubenswrapper[4957]: I0218 15:57:01.023215 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_8fd72d01-103a-4c09-8858-0fcd773f5d13/prometheus-operator-admission-webhook/0.log"
Feb 18 15:57:01 crc kubenswrapper[4957]: I0218 15:57:01.168688 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mxz2r_955eb799-56c6-47e7-b5f7-eccac4b52134/operator/1.log"
Feb 18 15:57:01 crc kubenswrapper[4957]: I0218 15:57:01.246743 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mxz2r_955eb799-56c6-47e7-b5f7-eccac4b52134/operator/0.log"
Feb 18 15:57:01 crc kubenswrapper[4957]: I0218 15:57:01.327766 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-624nc_791dd051-aa05-448b-b1d5-26cafb4662fd/observability-ui-dashboards/0.log"
Feb 18 15:57:01 crc kubenswrapper[4957]: I0218 15:57:01.402953 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-khfcc_aa193683-1796-419f-ac5f-e620b3206699/perses-operator/0.log"
Feb 18 15:57:06 crc kubenswrapper[4957]: I0218 15:57:06.212809 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:57:06 crc kubenswrapper[4957]: E0218 15:57:06.214741 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.145185 4957 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-kl2hc_e703948a-fdb1-445a-8ece-94bd76181899/cluster-logging-operator/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.288487 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-5tn9r_f43654c7-84ed-488e-912c-c089b778adc7/collector/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.437629 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8ebf8bd1-097b-45c7-be49-c38760e885e2/loki-compactor/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.531927 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-2wbxs_4de1a2b8-9bfb-4104-b065-e0c991cb95ea/loki-distributor/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.666399 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7b58bd6fcd-58vxq_6e82b47f-b61b-40dd-92f1-62180459082f/gateway/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.686029 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7b58bd6fcd-58vxq_6e82b47f-b61b-40dd-92f1-62180459082f/opa/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.866637 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7b58bd6fcd-n8wjk_ae719427-398b-455b-8d4f-d1f96df0e800/gateway/0.log" Feb 18 15:57:17 crc kubenswrapper[4957]: I0218 15:57:17.917033 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7b58bd6fcd-n8wjk_ae719427-398b-455b-8d4f-d1f96df0e800/opa/0.log" Feb 18 15:57:18 crc kubenswrapper[4957]: I0218 15:57:18.035726 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_c7e42da2-0160-4c19-bd98-1ebb4d0d84dc/loki-index-gateway/0.log" Feb 18 15:57:18 crc kubenswrapper[4957]: I0218 15:57:18.175837 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f0549571-def1-4cd5-9cae-77780cf6870b/loki-ingester/0.log" Feb 18 15:57:18 crc kubenswrapper[4957]: I0218 15:57:18.284678 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-t4b27_c093fd9d-72e8-42d1-a5ad-5e687f61aa9e/loki-querier/0.log" Feb 18 15:57:18 crc kubenswrapper[4957]: I0218 15:57:18.439978 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-x4qsb_d1e1403b-f3b0-4377-9c77-1d9f5b3c10c7/loki-query-frontend/0.log" Feb 18 15:57:19 crc kubenswrapper[4957]: I0218 15:57:19.213215 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:57:19 crc kubenswrapper[4957]: E0218 15:57:19.215023 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:57:32 crc kubenswrapper[4957]: I0218 15:57:32.212966 4957 scope.go:117] "RemoveContainer" 
containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:57:32 crc kubenswrapper[4957]: E0218 15:57:32.213831 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:57:34 crc kubenswrapper[4957]: I0218 15:57:34.619673 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-hw6sv_3929daaa-39b8-475f-9af0-644180cb7682/controller/1.log" Feb 18 15:57:34 crc kubenswrapper[4957]: I0218 15:57:34.730160 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-hw6sv_3929daaa-39b8-475f-9af0-644180cb7682/controller/0.log" Feb 18 15:57:34 crc kubenswrapper[4957]: I0218 15:57:34.733650 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-hw6sv_3929daaa-39b8-475f-9af0-644180cb7682/kube-rbac-proxy/0.log" Feb 18 15:57:34 crc kubenswrapper[4957]: I0218 15:57:34.842752 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-frr-files/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.075497 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-metrics/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.108515 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-reloader/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.110531 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-reloader/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.113939 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-frr-files/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.348029 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-reloader/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.348485 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-metrics/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.375918 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-metrics/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.386561 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-frr-files/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.570451 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/controller/1.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.575956 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-frr-files/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.600365 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-reloader/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.612895 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/cp-metrics/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.782881 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/controller/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.870670 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/frr-metrics/0.log" Feb 18 15:57:35 crc kubenswrapper[4957]: I0218 15:57:35.887461 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/frr/1.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.039649 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/kube-rbac-proxy/0.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.192115 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/reloader/0.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.212612 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/kube-rbac-proxy-frr/0.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.431054 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-bl7jx_84258a40-276a-4da4-8240-603932be25c0/frr-k8s-webhook-server/1.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.455890 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-bl7jx_84258a40-276a-4da4-8240-603932be25c0/frr-k8s-webhook-server/0.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.734906 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8675cb849f-2g7hj_4f287d67-8d26-430a-a775-fdf0abeed6dd/manager/1.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.750662 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8675cb849f-2g7hj_4f287d67-8d26-430a-a775-fdf0abeed6dd/manager/0.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.985308 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84df667ccc-2w5tf_32734ff2-fe7b-4588-a4c8-0e5882b54b87/webhook-server/1.log" Feb 18 15:57:36 crc kubenswrapper[4957]: I0218 15:57:36.988631 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84df667ccc-2w5tf_32734ff2-fe7b-4588-a4c8-0e5882b54b87/webhook-server/0.log" Feb 18 15:57:37 crc kubenswrapper[4957]: I0218 15:57:37.347451 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c8kqz_06e3f0bd-70d3-493b-ab24-e8f75298f7a3/frr/0.log" Feb 18 15:57:37 crc kubenswrapper[4957]: I0218 
15:57:37.879212 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wvv4k_379fdde6-815b-433b-b62c-b9863ea4fb9e/kube-rbac-proxy/0.log"
Feb 18 15:57:38 crc kubenswrapper[4957]: I0218 15:57:38.002807 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wvv4k_379fdde6-815b-433b-b62c-b9863ea4fb9e/speaker/1.log"
Feb 18 15:57:38 crc kubenswrapper[4957]: I0218 15:57:38.347950 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wvv4k_379fdde6-815b-433b-b62c-b9863ea4fb9e/speaker/0.log"
Feb 18 15:57:45 crc kubenswrapper[4957]: I0218 15:57:45.213402 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:57:45 crc kubenswrapper[4957]: E0218 15:57:45.216501 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:57:53 crc kubenswrapper[4957]: I0218 15:57:53.138644 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/util/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.098936 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/pull/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.138214 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/util/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.182297 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/pull/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.393361 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/util/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.395150 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/pull/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.401660 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcx5v_f303e553-d646-4bd0-9fff-92ba6ad6dc90/extract/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.598725 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/util/0.log"
Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.774431 4957 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/util/0.log" Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.777041 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/pull/0.log" Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.830072 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/pull/0.log" Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.966972 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/util/0.log" Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.967786 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/extract/0.log" Feb 18 15:57:54 crc kubenswrapper[4957]: I0218 15:57:54.976156 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08596th_b83c916a-2d04-4618-9254-4f4660a4b976/pull/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.143669 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/util/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.351530 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/util/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.355558 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/pull/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.397325 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/pull/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.585370 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/pull/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.588080 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/extract/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.593275 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kbn6m_e69e6354-b8aa-4eb0-80c2-85121b9ce3de/util/0.log" Feb 18 15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.754470 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/extract-utilities/0.log" Feb 18 
15:57:55 crc kubenswrapper[4957]: I0218 15:57:55.980336 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/extract-content/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.010413 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/extract-content/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.018020 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/extract-utilities/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.205013 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/extract-content/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.206450 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/extract-utilities/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.219085 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:57:56 crc kubenswrapper[4957]: E0218 15:57:56.229405 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.435207 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/extract-utilities/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.712850 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/extract-utilities/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.716271 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/extract-content/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.770812 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/extract-content/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.921480 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/extract-content/0.log"
Feb 18 15:57:56 crc kubenswrapper[4957]: I0218 15:57:56.922882 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/extract-utilities/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.146627 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/util/0.log"
Feb 18 15:57:57 crc
kubenswrapper[4957]: I0218 15:57:57.346556 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mjqvd_3d9fa28a-2d86-4e9f-a5da-d5f545bb0331/registry-server/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.390995 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/pull/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.396005 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/pull/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.445262 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/util/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.609851 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/util/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.629002 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/extract/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.721797 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kcpzg_9994b531-894e-4fbf-a8dc-8bdaa0684615/pull/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.825545 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/util/0.log"
Feb 18 15:57:57 crc kubenswrapper[4957]: I0218 15:57:57.835996 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nsb6k_a5d8a39a-4f7f-4d3e-b205-9a209721ca4b/registry-server/0.log"
Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.036847 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/pull/0.log"
Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.048623 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/util/0.log"
Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.070098 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/pull/0.log"
Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.266785 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/util/0.log"
Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.278646 4957 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/extract/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.305143 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cvskg_2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd/marketplace-operator/1.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.312108 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecat2mww_12179b8a-bf4d-428f-a65c-26ac376f2945/pull/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.473950 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cvskg_2d4f652d-1d92-4ace-8ac1-51aaf64ff1cd/marketplace-operator/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.506140 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/extract-utilities/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.666939 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/extract-utilities/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.682495 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/extract-content/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.692834 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/extract-content/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.847729 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/extract-utilities/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.878332 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/extract-content/0.log" Feb 18 15:57:58 crc kubenswrapper[4957]: I0218 15:57:58.897690 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/registry-server/1.log" Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.037770 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gllqp_b95ede57-e275-4ba0-834d-43356f6b960b/registry-server/0.log" Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.062796 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/extract-utilities/0.log" Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.228254 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/extract-utilities/0.log" Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.231621 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/extract-content/0.log" Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 
15:57:59.237867 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/extract-content/0.log"
Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.427864 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/extract-utilities/0.log"
Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.441813 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/extract-content/0.log"
Feb 18 15:57:59 crc kubenswrapper[4957]: I0218 15:57:59.514367 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8lgmk_9a5c9544-331f-47df-898d-d19b2c9fe2b1/registry-server/0.log"
Feb 18 15:58:10 crc kubenswrapper[4957]: I0218 15:58:10.214233 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18 15:58:10 crc kubenswrapper[4957]: E0218 15:58:10.215280 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 15:58:14 crc kubenswrapper[4957]: I0218 15:58:14.591389 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-h8fsb_36ce4ea6-b461-4e76-9db4-10bb9d864512/prometheus-operator/0.log"
Feb 18 15:58:14 crc kubenswrapper[4957]: I0218 15:58:14.612825 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-745fbd9cbf-xjh8t_8fd72d01-103a-4c09-8858-0fcd773f5d13/prometheus-operator-admission-webhook/0.log"
Feb 18 15:58:14 crc kubenswrapper[4957]: I0218 15:58:14.613530 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-745fbd9cbf-82dch_d89f03d4-3521-43ea-85f4-631c25a3379b/prometheus-operator-admission-webhook/0.log"
Feb 18 15:58:16 crc kubenswrapper[4957]: I0218 15:58:16.087953 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mxz2r_955eb799-56c6-47e7-b5f7-eccac4b52134/operator/1.log"
Feb 18 15:58:16 crc kubenswrapper[4957]: I0218 15:58:16.212793 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mxz2r_955eb799-56c6-47e7-b5f7-eccac4b52134/operator/0.log"
Feb 18 15:58:16 crc kubenswrapper[4957]: I0218 15:58:16.220499 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-624nc_791dd051-aa05-448b-b1d5-26cafb4662fd/observability-ui-dashboards/0.log"
Feb 18 15:58:16 crc kubenswrapper[4957]: I0218 15:58:16.238820 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-khfcc_aa193683-1796-419f-ac5f-e620b3206699/perses-operator/0.log"
Feb 18 15:58:25 crc kubenswrapper[4957]: I0218 15:58:25.213694 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6"
Feb 18
15:58:25 crc kubenswrapper[4957]: E0218 15:58:25.216234 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:58:31 crc kubenswrapper[4957]: I0218 15:58:31.536239 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-669bf4b44b-ndlc7_da87ca13-b23a-4345-b79d-46c8e9bec9b3/manager/0.log" Feb 18 15:58:31 crc kubenswrapper[4957]: I0218 15:58:31.549705 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-669bf4b44b-ndlc7_da87ca13-b23a-4345-b79d-46c8e9bec9b3/manager/1.log" Feb 18 15:58:31 crc kubenswrapper[4957]: I0218 15:58:31.585796 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-669bf4b44b-ndlc7_da87ca13-b23a-4345-b79d-46c8e9bec9b3/kube-rbac-proxy/0.log" Feb 18 15:58:35 crc kubenswrapper[4957]: E0218 15:58:35.710743 4957 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:47444->38.102.83.213:46479: write tcp 38.102.83.213:47444->38.102.83.213:46479: write: broken pipe Feb 18 15:58:37 crc kubenswrapper[4957]: I0218 15:58:37.212996 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:58:37 crc kubenswrapper[4957]: E0218 15:58:37.213669 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:58:43 crc kubenswrapper[4957]: E0218 15:58:43.808677 4957 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:47566->38.102.83.213:46479: write tcp 38.102.83.213:47566->38.102.83.213:46479: write: broken pipe Feb 18 15:58:51 crc kubenswrapper[4957]: I0218 15:58:51.214266 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:58:51 crc kubenswrapper[4957]: E0218 15:58:51.215124 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:59:02 crc kubenswrapper[4957]: I0218 15:59:02.214166 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:59:02 crc kubenswrapper[4957]: E0218 15:59:02.214855 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:59:17 crc kubenswrapper[4957]: I0218 15:59:17.213684 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:59:17 crc kubenswrapper[4957]: E0218 15:59:17.214735 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:59:21 crc kubenswrapper[4957]: I0218 15:59:21.145852 4957 scope.go:117] "RemoveContainer" containerID="d983eb4a0e815c325155d134ad922689c7c19c62ac6d15f31301a766e95b12d1" Feb 18 15:59:21 crc kubenswrapper[4957]: I0218 15:59:21.241846 4957 scope.go:117] "RemoveContainer" containerID="4916d51595abddad001c698605c4dc16dc207af0355baee6bb28524a254b8db3" Feb 18 15:59:28 crc kubenswrapper[4957]: I0218 15:59:28.213966 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:59:28 crc kubenswrapper[4957]: E0218 15:59:28.215369 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:59:41 crc kubenswrapper[4957]: I0218 15:59:41.214731 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:59:41 crc kubenswrapper[4957]: E0218 15:59:41.216909 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 15:59:53 crc kubenswrapper[4957]: I0218 15:59:53.214066 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 15:59:53 crc kubenswrapper[4957]: E0218 15:59:53.215977 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.361837 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87"] Feb 18 16:00:00 crc kubenswrapper[4957]: E0218 16:00:00.365816 
4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102a2a30-4adf-4ef7-8c60-bf10d5db664b" containerName="container-00" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.365841 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="102a2a30-4adf-4ef7-8c60-bf10d5db664b" containerName="container-00" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.366980 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="102a2a30-4adf-4ef7-8c60-bf10d5db664b" containerName="container-00" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.368790 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.370796 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.378892 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.454908 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87"] Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.454995 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0c871db-b13e-48dc-a791-b7dbceb6c544-secret-volume\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.455178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0c871db-b13e-48dc-a791-b7dbceb6c544-config-volume\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.455243 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dx45\" (UniqueName: \"kubernetes.io/projected/b0c871db-b13e-48dc-a791-b7dbceb6c544-kube-api-access-5dx45\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.558063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0c871db-b13e-48dc-a791-b7dbceb6c544-config-volume\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.558615 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dx45\" (UniqueName: \"kubernetes.io/projected/b0c871db-b13e-48dc-a791-b7dbceb6c544-kube-api-access-5dx45\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc 
kubenswrapper[4957]: I0218 16:00:00.558867 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0c871db-b13e-48dc-a791-b7dbceb6c544-secret-volume\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.559590 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0c871db-b13e-48dc-a791-b7dbceb6c544-config-volume\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.581950 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0c871db-b13e-48dc-a791-b7dbceb6c544-secret-volume\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.584161 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dx45\" (UniqueName: \"kubernetes.io/projected/b0c871db-b13e-48dc-a791-b7dbceb6c544-kube-api-access-5dx45\") pod \"collect-profiles-29523840-rxk87\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:00 crc kubenswrapper[4957]: I0218 16:00:00.700321 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:01 crc kubenswrapper[4957]: I0218 16:00:01.951294 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87"] Feb 18 16:00:02 crc kubenswrapper[4957]: W0218 16:00:02.926532 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c871db_b13e_48dc_a791_b7dbceb6c544.slice/crio-52ba5d4f54ce737ddef1ac649d77d02c38351248880a0637b4c728ee92b6c7f4 WatchSource:0}: Error finding container 52ba5d4f54ce737ddef1ac649d77d02c38351248880a0637b4c728ee92b6c7f4: Status 404 returned error can't find the container with id 52ba5d4f54ce737ddef1ac649d77d02c38351248880a0637b4c728ee92b6c7f4 Feb 18 16:00:03 crc kubenswrapper[4957]: I0218 16:00:03.555883 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" event={"ID":"b0c871db-b13e-48dc-a791-b7dbceb6c544","Type":"ContainerStarted","Data":"df538163f388965ca98a4daf47ce043812b0865f6f8ed9ce17595347206baafa"} Feb 18 16:00:03 crc kubenswrapper[4957]: I0218 16:00:03.556160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" event={"ID":"b0c871db-b13e-48dc-a791-b7dbceb6c544","Type":"ContainerStarted","Data":"52ba5d4f54ce737ddef1ac649d77d02c38351248880a0637b4c728ee92b6c7f4"} Feb 18 16:00:03 crc kubenswrapper[4957]: I0218 16:00:03.590081 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" podStartSLOduration=3.588984958 
podStartE2EDuration="3.588984958s" podCreationTimestamp="2026-02-18 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:00:03.578185939 +0000 UTC m=+5310.099050693" watchObservedRunningTime="2026-02-18 16:00:03.588984958 +0000 UTC m=+5310.109849702" Feb 18 16:00:04 crc kubenswrapper[4957]: I0218 16:00:04.577517 4957 generic.go:334] "Generic (PLEG): container finished" podID="b0c871db-b13e-48dc-a791-b7dbceb6c544" containerID="df538163f388965ca98a4daf47ce043812b0865f6f8ed9ce17595347206baafa" exitCode=0 Feb 18 16:00:04 crc kubenswrapper[4957]: I0218 16:00:04.577713 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" event={"ID":"b0c871db-b13e-48dc-a791-b7dbceb6c544","Type":"ContainerDied","Data":"df538163f388965ca98a4daf47ce043812b0865f6f8ed9ce17595347206baafa"} Feb 18 16:00:05 crc kubenswrapper[4957]: I0218 16:00:05.214828 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 16:00:05 crc kubenswrapper[4957]: E0218 16:00:05.215280 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.094039 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.202961 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0c871db-b13e-48dc-a791-b7dbceb6c544-config-volume\") pod \"b0c871db-b13e-48dc-a791-b7dbceb6c544\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.203159 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dx45\" (UniqueName: \"kubernetes.io/projected/b0c871db-b13e-48dc-a791-b7dbceb6c544-kube-api-access-5dx45\") pod \"b0c871db-b13e-48dc-a791-b7dbceb6c544\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.203464 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0c871db-b13e-48dc-a791-b7dbceb6c544-secret-volume\") pod \"b0c871db-b13e-48dc-a791-b7dbceb6c544\" (UID: \"b0c871db-b13e-48dc-a791-b7dbceb6c544\") " Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.205769 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c871db-b13e-48dc-a791-b7dbceb6c544-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0c871db-b13e-48dc-a791-b7dbceb6c544" (UID: "b0c871db-b13e-48dc-a791-b7dbceb6c544"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.211720 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c871db-b13e-48dc-a791-b7dbceb6c544-kube-api-access-5dx45" (OuterVolumeSpecName: "kube-api-access-5dx45") pod "b0c871db-b13e-48dc-a791-b7dbceb6c544" (UID: "b0c871db-b13e-48dc-a791-b7dbceb6c544"). InnerVolumeSpecName "kube-api-access-5dx45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.213348 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c871db-b13e-48dc-a791-b7dbceb6c544-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b0c871db-b13e-48dc-a791-b7dbceb6c544" (UID: "b0c871db-b13e-48dc-a791-b7dbceb6c544"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.316332 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dx45\" (UniqueName: \"kubernetes.io/projected/b0c871db-b13e-48dc-a791-b7dbceb6c544-kube-api-access-5dx45\") on node \"crc\" DevicePath \"\"" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.316710 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0c871db-b13e-48dc-a791-b7dbceb6c544-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.316727 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0c871db-b13e-48dc-a791-b7dbceb6c544-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.616964 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" event={"ID":"b0c871db-b13e-48dc-a791-b7dbceb6c544","Type":"ContainerDied","Data":"52ba5d4f54ce737ddef1ac649d77d02c38351248880a0637b4c728ee92b6c7f4"} Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.617074 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523840-rxk87" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.617647 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52ba5d4f54ce737ddef1ac649d77d02c38351248880a0637b4c728ee92b6c7f4" Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.718239 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c"] Feb 18 16:00:06 crc kubenswrapper[4957]: I0218 16:00:06.731226 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-4bq6c"] Feb 18 16:00:08 crc kubenswrapper[4957]: I0218 16:00:08.236866 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5ee832-b364-4a76-8d7d-6d3a576713a8" path="/var/lib/kubelet/pods/8e5ee832-b364-4a76-8d7d-6d3a576713a8/volumes" Feb 18 16:00:18 crc kubenswrapper[4957]: I0218 16:00:18.556142 4957 trace.go:236] Trace[1241412115]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (18-Feb-2026 16:00:17.367) (total time: 1188ms): Feb 18 16:00:18 crc kubenswrapper[4957]: Trace[1241412115]: [1.188837988s] [1.188837988s] END Feb 18 16:00:20 crc kubenswrapper[4957]: I0218 16:00:20.214360 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 16:00:20 crc kubenswrapper[4957]: I0218 16:00:20.859384 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049"} Feb 18 16:00:21 crc kubenswrapper[4957]: I0218 16:00:21.620248 4957 scope.go:117] "RemoveContainer" containerID="d1527e771e632b4b6dad271517475d0c90401605a73b27cc1fb3983a48900e21" Feb 18 16:00:21 crc kubenswrapper[4957]: I0218 16:00:21.662708 4957 scope.go:117] "RemoveContainer" containerID="9f6ecfbee96b92c21f3209b6456a4a4549e8d37666441c74e0089e4580a4dae3" Feb 18 16:00:35 crc kubenswrapper[4957]: I0218 16:00:35.065264 4957 generic.go:334] "Generic (PLEG): container finished" podID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerID="e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba" exitCode=0 Feb 18 16:00:35 crc kubenswrapper[4957]: I0218 16:00:35.065324 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6llw9/must-gather-qx9qn" event={"ID":"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba","Type":"ContainerDied","Data":"e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba"} Feb 18 16:00:35 crc kubenswrapper[4957]: I0218 16:00:35.068609 4957 scope.go:117] "RemoveContainer" containerID="e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba" Feb 18 16:00:35 crc kubenswrapper[4957]: I0218 16:00:35.353012 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6llw9_must-gather-qx9qn_c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba/gather/0.log" Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.207783 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6llw9/must-gather-qx9qn"] Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.216616 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6llw9/must-gather-qx9qn" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" 
containerName="copy" containerID="cri-o://2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6" gracePeriod=2 Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.252192 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6llw9/must-gather-qx9qn"] Feb 18 16:00:45 crc kubenswrapper[4957]: E0218 16:00:45.330119 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc567a0d5_a5c1_4eb5_bbbf_5b68e1bff3ba.slice/crio-conmon-2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.710626 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6llw9_must-gather-qx9qn_c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba/copy/0.log" Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.711551 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.799696 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btj9w\" (UniqueName: \"kubernetes.io/projected/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-kube-api-access-btj9w\") pod \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.799812 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-must-gather-output\") pod \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\" (UID: \"c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba\") " Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.806783 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-kube-api-access-btj9w" (OuterVolumeSpecName: "kube-api-access-btj9w") pod "c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" (UID: "c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba"). InnerVolumeSpecName "kube-api-access-btj9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:00:45 crc kubenswrapper[4957]: I0218 16:00:45.907064 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btj9w\" (UniqueName: \"kubernetes.io/projected/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-kube-api-access-btj9w\") on node \"crc\" DevicePath \"\"" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.043440 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" (UID: "c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.112799 4957 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.225278 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" path="/var/lib/kubelet/pods/c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba/volumes" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.264228 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6llw9_must-gather-qx9qn_c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba/copy/0.log" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.264787 4957 generic.go:334] "Generic (PLEG): container finished" podID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerID="2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6" exitCode=143 Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.264840 4957 scope.go:117] "RemoveContainer" containerID="2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.264990 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6llw9/must-gather-qx9qn" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.293006 4957 scope.go:117] "RemoveContainer" containerID="e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.335037 4957 scope.go:117] "RemoveContainer" containerID="2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6" Feb 18 16:00:46 crc kubenswrapper[4957]: E0218 16:00:46.337578 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6\": container with ID starting with 2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6 not found: ID does not exist" containerID="2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.337609 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6"} err="failed to get container status \"2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6\": rpc error: code = NotFound desc = could not find container \"2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6\": container with ID starting with 2a27196e0ef58f1c8574f0cf0b7d6ef0f169772770f18196d6396199f9979df6 not found: ID does not exist" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 16:00:46.337633 4957 scope.go:117] "RemoveContainer" containerID="e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba" Feb 18 16:00:46 crc kubenswrapper[4957]: E0218 16:00:46.337946 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba\": container with ID starting with e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba not found: ID does not exist" containerID="e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba" Feb 18 16:00:46 crc kubenswrapper[4957]: I0218 
16:00:46.337970 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba"} err="failed to get container status \"e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba\": rpc error: code = NotFound desc = could not find container \"e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba\": container with ID starting with e9231d43cbbd0b6f4e3b1d2b56c8d76634e51ce9ee98ef4510743f1e94cce6ba not found: ID does not exist" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.155256 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523841-498bd"] Feb 18 16:01:00 crc kubenswrapper[4957]: E0218 16:01:00.156397 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c871db-b13e-48dc-a791-b7dbceb6c544" containerName="collect-profiles" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.156430 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c871db-b13e-48dc-a791-b7dbceb6c544" containerName="collect-profiles" Feb 18 16:01:00 crc kubenswrapper[4957]: E0218 16:01:00.156447 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerName="gather" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.156454 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerName="gather" Feb 18 16:01:00 crc kubenswrapper[4957]: E0218 16:01:00.156495 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerName="copy" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.156504 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerName="copy" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.156769 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerName="gather" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.156803 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c567a0d5-a5c1-4eb5-bbbf-5b68e1bff3ba" containerName="copy" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.156819 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c871db-b13e-48dc-a791-b7dbceb6c544" containerName="collect-profiles" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.160370 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.167974 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523841-498bd"] Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.289695 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-fernet-keys\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.289819 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-combined-ca-bundle\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.289972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-config-data\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.290040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfpvq\" (UniqueName: \"kubernetes.io/projected/27c47f80-c76e-4fbd-8004-88cee7a0497a-kube-api-access-cfpvq\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.392367 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-fernet-keys\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.392487 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-combined-ca-bundle\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.392578 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-config-data\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.392626 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfpvq\" (UniqueName: \"kubernetes.io/projected/27c47f80-c76e-4fbd-8004-88cee7a0497a-kube-api-access-cfpvq\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.397918 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-fernet-keys\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.398091 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-combined-ca-bundle\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.398957 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-config-data\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.417817 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfpvq\" (UniqueName: \"kubernetes.io/projected/27c47f80-c76e-4fbd-8004-88cee7a0497a-kube-api-access-cfpvq\") pod \"keystone-cron-29523841-498bd\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:00 crc kubenswrapper[4957]: I0218 16:01:00.485902 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:01 crc kubenswrapper[4957]: I0218 16:01:01.043987 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523841-498bd"] Feb 18 16:01:01 crc kubenswrapper[4957]: I0218 16:01:01.472787 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523841-498bd" event={"ID":"27c47f80-c76e-4fbd-8004-88cee7a0497a","Type":"ContainerStarted","Data":"d67d3a6e53ec6610d6d12c0057748fb0ad9fdf4b93a15e01c7bd637a7ab2d27f"} Feb 18 16:01:01 crc kubenswrapper[4957]: I0218 16:01:01.473059 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523841-498bd" event={"ID":"27c47f80-c76e-4fbd-8004-88cee7a0497a","Type":"ContainerStarted","Data":"73a0da7ecfefdb65f7f01154be1e9d84226232ca9d77c8f740d9d8555e7570d7"} Feb 18 16:01:01 crc kubenswrapper[4957]: I0218 16:01:01.514676 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523841-498bd" podStartSLOduration=1.514650508 podStartE2EDuration="1.514650508s" podCreationTimestamp="2026-02-18 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:01:01.489158819 +0000 UTC m=+5368.010023563" watchObservedRunningTime="2026-02-18 16:01:01.514650508 +0000 UTC m=+5368.035515262" Feb 18 16:01:05 crc kubenswrapper[4957]: I0218 16:01:05.531009 4957 generic.go:334] "Generic (PLEG): container finished" podID="27c47f80-c76e-4fbd-8004-88cee7a0497a" containerID="d67d3a6e53ec6610d6d12c0057748fb0ad9fdf4b93a15e01c7bd637a7ab2d27f" exitCode=0 Feb 18 16:01:05 crc kubenswrapper[4957]: I0218 16:01:05.531097 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523841-498bd" event={"ID":"27c47f80-c76e-4fbd-8004-88cee7a0497a","Type":"ContainerDied","Data":"d67d3a6e53ec6610d6d12c0057748fb0ad9fdf4b93a15e01c7bd637a7ab2d27f"} Feb 18 16:01:07 crc kubenswrapper[4957]: 
I0218 16:01:07.239875 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.284581 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfpvq\" (UniqueName: \"kubernetes.io/projected/27c47f80-c76e-4fbd-8004-88cee7a0497a-kube-api-access-cfpvq\") pod \"27c47f80-c76e-4fbd-8004-88cee7a0497a\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.284637 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-config-data\") pod \"27c47f80-c76e-4fbd-8004-88cee7a0497a\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.284722 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-fernet-keys\") pod \"27c47f80-c76e-4fbd-8004-88cee7a0497a\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.285629 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-combined-ca-bundle\") pod \"27c47f80-c76e-4fbd-8004-88cee7a0497a\" (UID: \"27c47f80-c76e-4fbd-8004-88cee7a0497a\") " Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.294098 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c47f80-c76e-4fbd-8004-88cee7a0497a-kube-api-access-cfpvq" (OuterVolumeSpecName: "kube-api-access-cfpvq") pod "27c47f80-c76e-4fbd-8004-88cee7a0497a" (UID: "27c47f80-c76e-4fbd-8004-88cee7a0497a"). InnerVolumeSpecName "kube-api-access-cfpvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.294464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "27c47f80-c76e-4fbd-8004-88cee7a0497a" (UID: "27c47f80-c76e-4fbd-8004-88cee7a0497a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.389074 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfpvq\" (UniqueName: \"kubernetes.io/projected/27c47f80-c76e-4fbd-8004-88cee7a0497a-kube-api-access-cfpvq\") on node \"crc\" DevicePath \"\"" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.389106 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.434793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27c47f80-c76e-4fbd-8004-88cee7a0497a" (UID: "27c47f80-c76e-4fbd-8004-88cee7a0497a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.486559 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-config-data" (OuterVolumeSpecName: "config-data") pod "27c47f80-c76e-4fbd-8004-88cee7a0497a" (UID: "27c47f80-c76e-4fbd-8004-88cee7a0497a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.491609 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.491638 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c47f80-c76e-4fbd-8004-88cee7a0497a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.603543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523841-498bd" event={"ID":"27c47f80-c76e-4fbd-8004-88cee7a0497a","Type":"ContainerDied","Data":"73a0da7ecfefdb65f7f01154be1e9d84226232ca9d77c8f740d9d8555e7570d7"} Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.603590 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73a0da7ecfefdb65f7f01154be1e9d84226232ca9d77c8f740d9d8555e7570d7" Feb 18 16:01:07 crc kubenswrapper[4957]: I0218 16:01:07.603765 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523841-498bd" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.447047 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hh92"] Feb 18 16:01:20 crc kubenswrapper[4957]: E0218 16:01:20.448618 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c47f80-c76e-4fbd-8004-88cee7a0497a" containerName="keystone-cron" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.448641 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c47f80-c76e-4fbd-8004-88cee7a0497a" containerName="keystone-cron" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.450107 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c47f80-c76e-4fbd-8004-88cee7a0497a" containerName="keystone-cron" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.455630 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.486112 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hh92"] Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.505639 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-catalog-content\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.507064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/5dc758f8-752e-4498-9c77-789c27fdd313-kube-api-access-gslh4\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.507097 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-utilities\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.609096 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-catalog-content\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.609249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/5dc758f8-752e-4498-9c77-789c27fdd313-kube-api-access-gslh4\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.609273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-utilities\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.609812 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-utilities\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.610030 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-catalog-content\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.648094 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/5dc758f8-752e-4498-9c77-789c27fdd313-kube-api-access-gslh4\") pod \"community-operators-9hh92\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:20 crc kubenswrapper[4957]: I0218 16:01:20.804778 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:21 crc kubenswrapper[4957]: I0218 16:01:21.393287 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hh92"] Feb 18 16:01:21 crc kubenswrapper[4957]: I0218 16:01:21.805527 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerStarted","Data":"18974ef4c16a9f8ee0561514f9639231726c24f044c4f15633fd1f5a7358424b"} Feb 18 16:01:21 crc kubenswrapper[4957]: I0218 16:01:21.805913 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerStarted","Data":"2941248a6318898e1566da24f2d88062668934f23cbe89e99201b65a8c039d20"} Feb 18 16:01:22 crc kubenswrapper[4957]: I0218 16:01:22.819694 4957 generic.go:334] "Generic (PLEG): container finished" podID="5dc758f8-752e-4498-9c77-789c27fdd313" containerID="18974ef4c16a9f8ee0561514f9639231726c24f044c4f15633fd1f5a7358424b" exitCode=0 Feb 18 16:01:22 crc kubenswrapper[4957]: I0218 16:01:22.819786 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerDied","Data":"18974ef4c16a9f8ee0561514f9639231726c24f044c4f15633fd1f5a7358424b"} Feb 18 16:01:22 crc kubenswrapper[4957]: I0218 16:01:22.828722 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 16:01:23 crc kubenswrapper[4957]: I0218 16:01:23.836680 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerStarted","Data":"9cd8c1bbe09409e4ee4404d69caedf57c2cdca8388ae55d948f6ff4900982ab7"} Feb 18 16:01:25 crc kubenswrapper[4957]: I0218 16:01:25.876620 4957 generic.go:334] "Generic (PLEG): container finished" podID="5dc758f8-752e-4498-9c77-789c27fdd313" containerID="9cd8c1bbe09409e4ee4404d69caedf57c2cdca8388ae55d948f6ff4900982ab7" exitCode=0 Feb 18 16:01:25 crc kubenswrapper[4957]: I0218 16:01:25.876655 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerDied","Data":"9cd8c1bbe09409e4ee4404d69caedf57c2cdca8388ae55d948f6ff4900982ab7"} Feb 18 16:01:26 crc kubenswrapper[4957]: I0218 16:01:26.897799 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerStarted","Data":"d186d85525680d000e6b5f5dec26c850cfa296181c5753288f750c2ba4b0295a"} Feb 18 16:01:26 crc kubenswrapper[4957]: I0218 16:01:26.922004 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hh92" podStartSLOduration=3.239777918 podStartE2EDuration="6.921979267s" 
podCreationTimestamp="2026-02-18 16:01:20 +0000 UTC" firstStartedPulling="2026-02-18 16:01:22.82266662 +0000 UTC m=+5389.343531364" lastFinishedPulling="2026-02-18 16:01:26.504867969 +0000 UTC m=+5393.025732713" observedRunningTime="2026-02-18 16:01:26.919068834 +0000 UTC m=+5393.439933578" watchObservedRunningTime="2026-02-18 16:01:26.921979267 +0000 UTC m=+5393.442844031" Feb 18 16:01:30 crc kubenswrapper[4957]: I0218 16:01:30.806046 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:30 crc kubenswrapper[4957]: I0218 16:01:30.807135 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:01:31 crc kubenswrapper[4957]: I0218 16:01:31.893288 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9hh92" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server" probeResult="failure" output=< Feb 18 16:01:31 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 16:01:31 crc kubenswrapper[4957]: > Feb 18 16:01:41 crc kubenswrapper[4957]: I0218 16:01:41.882582 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9hh92" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server" probeResult="failure" output=< Feb 18 16:01:41 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 16:01:41 crc kubenswrapper[4957]: > Feb 18 16:01:51 crc kubenswrapper[4957]: I0218 16:01:51.908321 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9hh92" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server" probeResult="failure" output=< Feb 18 16:01:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 16:01:51 crc kubenswrapper[4957]: > Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.393102 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2l9g4"] Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.396823 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.407227 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2l9g4"] Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.434346 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-catalog-content\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.434559 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-utilities\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.434623 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zj5\" (UniqueName: \"kubernetes.io/projected/91710a2c-eb3b-48de-ba3e-f273865eecc8-kube-api-access-l7zj5\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.536985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-utilities\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.537054 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zj5\" (UniqueName: \"kubernetes.io/projected/91710a2c-eb3b-48de-ba3e-f273865eecc8-kube-api-access-l7zj5\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.537217 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-catalog-content\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.539482 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-catalog-content\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.539508 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-utilities\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.560627 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l7zj5\" (UniqueName: \"kubernetes.io/projected/91710a2c-eb3b-48de-ba3e-f273865eecc8-kube-api-access-l7zj5\") pod \"certified-operators-2l9g4\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") " pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:56 crc kubenswrapper[4957]: I0218 16:01:56.757358 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l9g4" Feb 18 16:01:57 crc kubenswrapper[4957]: I0218 16:01:57.680378 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2l9g4"] Feb 18 16:01:57 crc kubenswrapper[4957]: W0218 16:01:57.697868 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91710a2c_eb3b_48de_ba3e_f273865eecc8.slice/crio-458ec6e2b2f53183bb867ed28fe33fde61c9bc8a5da8bb6fc63dcbf69428d4c5 WatchSource:0}: Error finding container 458ec6e2b2f53183bb867ed28fe33fde61c9bc8a5da8bb6fc63dcbf69428d4c5: Status 404 returned error can't find the container with id 458ec6e2b2f53183bb867ed28fe33fde61c9bc8a5da8bb6fc63dcbf69428d4c5 Feb 18 16:01:58 crc kubenswrapper[4957]: I0218 16:01:58.362466 4957 generic.go:334] "Generic (PLEG): container finished" podID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerID="1a0ea7d7231d5056f6940107626714b819b5c7e7ea25b63ab96aabece49ffa75" exitCode=0 Feb 18 16:01:58 crc kubenswrapper[4957]: I0218 16:01:58.362544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerDied","Data":"1a0ea7d7231d5056f6940107626714b819b5c7e7ea25b63ab96aabece49ffa75"} Feb 18 16:01:58 crc kubenswrapper[4957]: I0218 16:01:58.362992 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerStarted","Data":"458ec6e2b2f53183bb867ed28fe33fde61c9bc8a5da8bb6fc63dcbf69428d4c5"} Feb 18 16:02:00 crc kubenswrapper[4957]: I0218 16:02:00.931003 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:02:01 crc kubenswrapper[4957]: I0218 16:02:01.069222 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:02:01 crc kubenswrapper[4957]: I0218 16:02:01.402542 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerStarted","Data":"5578d8437d3fbc380afca9ae6b358bd8edc2a69cd3bc78f96c7be682ea0d260a"} Feb 18 16:02:01 crc kubenswrapper[4957]: I0218 16:02:01.997578 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hh92"] Feb 18 16:02:02 crc kubenswrapper[4957]: I0218 16:02:02.430354 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hh92" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server" containerID="cri-o://d186d85525680d000e6b5f5dec26c850cfa296181c5753288f750c2ba4b0295a" gracePeriod=2 Feb 18 16:02:03 crc kubenswrapper[4957]: I0218 16:02:03.633173 4957 generic.go:334] "Generic (PLEG): container finished" podID="5dc758f8-752e-4498-9c77-789c27fdd313" 
containerID="d186d85525680d000e6b5f5dec26c850cfa296181c5753288f750c2ba4b0295a" exitCode=0 Feb 18 16:02:03 crc kubenswrapper[4957]: I0218 16:02:03.633245 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerDied","Data":"d186d85525680d000e6b5f5dec26c850cfa296181c5753288f750c2ba4b0295a"} Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.642765 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.647716 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hh92" event={"ID":"5dc758f8-752e-4498-9c77-789c27fdd313","Type":"ContainerDied","Data":"2941248a6318898e1566da24f2d88062668934f23cbe89e99201b65a8c039d20"} Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.647770 4957 scope.go:117] "RemoveContainer" containerID="d186d85525680d000e6b5f5dec26c850cfa296181c5753288f750c2ba4b0295a" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.647933 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hh92" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.704611 4957 scope.go:117] "RemoveContainer" containerID="9cd8c1bbe09409e4ee4404d69caedf57c2cdca8388ae55d948f6ff4900982ab7" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.736921 4957 scope.go:117] "RemoveContainer" containerID="18974ef4c16a9f8ee0561514f9639231726c24f044c4f15633fd1f5a7358424b" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.806379 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-utilities\") pod \"5dc758f8-752e-4498-9c77-789c27fdd313\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.806561 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/5dc758f8-752e-4498-9c77-789c27fdd313-kube-api-access-gslh4\") pod \"5dc758f8-752e-4498-9c77-789c27fdd313\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.806730 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-catalog-content\") pod \"5dc758f8-752e-4498-9c77-789c27fdd313\" (UID: \"5dc758f8-752e-4498-9c77-789c27fdd313\") " Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.809441 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-utilities" (OuterVolumeSpecName: "utilities") pod "5dc758f8-752e-4498-9c77-789c27fdd313" (UID: "5dc758f8-752e-4498-9c77-789c27fdd313"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.813882 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc758f8-752e-4498-9c77-789c27fdd313-kube-api-access-gslh4" (OuterVolumeSpecName: "kube-api-access-gslh4") pod "5dc758f8-752e-4498-9c77-789c27fdd313" (UID: "5dc758f8-752e-4498-9c77-789c27fdd313"). 
InnerVolumeSpecName "kube-api-access-gslh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.858094 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dc758f8-752e-4498-9c77-789c27fdd313" (UID: "5dc758f8-752e-4498-9c77-789c27fdd313"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.909881 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.909931 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/5dc758f8-752e-4498-9c77-789c27fdd313-kube-api-access-gslh4\") on node \"crc\" DevicePath \"\"" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.909946 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc758f8-752e-4498-9c77-789c27fdd313-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.984535 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hh92"] Feb 18 16:02:04 crc kubenswrapper[4957]: I0218 16:02:04.997401 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hh92"] Feb 18 16:02:05 crc kubenswrapper[4957]: I0218 16:02:05.663146 4957 generic.go:334] "Generic (PLEG): container finished" podID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerID="5578d8437d3fbc380afca9ae6b358bd8edc2a69cd3bc78f96c7be682ea0d260a" exitCode=0 Feb 18 16:02:05 crc kubenswrapper[4957]: I0218 16:02:05.663187 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerDied","Data":"5578d8437d3fbc380afca9ae6b358bd8edc2a69cd3bc78f96c7be682ea0d260a"} Feb 18 16:02:06 crc kubenswrapper[4957]: I0218 16:02:06.226760 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" path="/var/lib/kubelet/pods/5dc758f8-752e-4498-9c77-789c27fdd313/volumes" Feb 18 16:02:08 crc kubenswrapper[4957]: I0218 16:02:08.705457 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerStarted","Data":"8781f1275a4dfca1bcda2dfd57b0621c3a991d7de188d7132576827885f1e234"} Feb 18 16:02:08 crc kubenswrapper[4957]: I0218 16:02:08.748313 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2l9g4" podStartSLOduration=4.7827993079999995 podStartE2EDuration="12.748291796s" podCreationTimestamp="2026-02-18 16:01:56 +0000 UTC" firstStartedPulling="2026-02-18 16:01:58.364918775 +0000 UTC m=+5424.885783509" lastFinishedPulling="2026-02-18 16:02:06.330411253 +0000 UTC m=+5432.851275997" observedRunningTime="2026-02-18 16:02:08.742636555 +0000 UTC m=+5435.263501299" watchObservedRunningTime="2026-02-18 16:02:08.748291796 +0000 UTC m=+5435.269156540" Feb 18 16:02:16 crc kubenswrapper[4957]: I0218 
Feb 18 16:02:16 crc kubenswrapper[4957]: I0218 16:02:16.758225 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2l9g4"
Feb 18 16:02:17 crc kubenswrapper[4957]: I0218 16:02:17.587693 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2l9g4"
Feb 18 16:02:17 crc kubenswrapper[4957]: I0218 16:02:17.679664 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2l9g4"
Feb 18 16:02:17 crc kubenswrapper[4957]: I0218 16:02:17.836638 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2l9g4"]
Feb 18 16:02:18 crc kubenswrapper[4957]: I0218 16:02:18.828588 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2l9g4" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="registry-server" containerID="cri-o://8781f1275a4dfca1bcda2dfd57b0621c3a991d7de188d7132576827885f1e234" gracePeriod=2
Feb 18 16:02:19 crc kubenswrapper[4957]: I0218 16:02:19.844635 4957 generic.go:334] "Generic (PLEG): container finished" podID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerID="8781f1275a4dfca1bcda2dfd57b0621c3a991d7de188d7132576827885f1e234" exitCode=0
Feb 18 16:02:19 crc kubenswrapper[4957]: I0218 16:02:19.844688 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerDied","Data":"8781f1275a4dfca1bcda2dfd57b0621c3a991d7de188d7132576827885f1e234"}
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.242339 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l9g4"
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.355392 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-utilities\") pod \"91710a2c-eb3b-48de-ba3e-f273865eecc8\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") "
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.355674 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-catalog-content\") pod \"91710a2c-eb3b-48de-ba3e-f273865eecc8\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") "
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.355796 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7zj5\" (UniqueName: \"kubernetes.io/projected/91710a2c-eb3b-48de-ba3e-f273865eecc8-kube-api-access-l7zj5\") pod \"91710a2c-eb3b-48de-ba3e-f273865eecc8\" (UID: \"91710a2c-eb3b-48de-ba3e-f273865eecc8\") "
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.357349 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-utilities" (OuterVolumeSpecName: "utilities") pod "91710a2c-eb3b-48de-ba3e-f273865eecc8" (UID: "91710a2c-eb3b-48de-ba3e-f273865eecc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.357540 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.364364 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91710a2c-eb3b-48de-ba3e-f273865eecc8-kube-api-access-l7zj5" (OuterVolumeSpecName: "kube-api-access-l7zj5") pod "91710a2c-eb3b-48de-ba3e-f273865eecc8" (UID: "91710a2c-eb3b-48de-ba3e-f273865eecc8"). InnerVolumeSpecName "kube-api-access-l7zj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.450294 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91710a2c-eb3b-48de-ba3e-f273865eecc8" (UID: "91710a2c-eb3b-48de-ba3e-f273865eecc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.460846 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91710a2c-eb3b-48de-ba3e-f273865eecc8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.460895 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7zj5\" (UniqueName: \"kubernetes.io/projected/91710a2c-eb3b-48de-ba3e-f273865eecc8-kube-api-access-l7zj5\") on node \"crc\" DevicePath \"\"" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.868096 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l9g4" event={"ID":"91710a2c-eb3b-48de-ba3e-f273865eecc8","Type":"ContainerDied","Data":"458ec6e2b2f53183bb867ed28fe33fde61c9bc8a5da8bb6fc63dcbf69428d4c5"} Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.868186 4957 scope.go:117] "RemoveContainer" containerID="8781f1275a4dfca1bcda2dfd57b0621c3a991d7de188d7132576827885f1e234" Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.868190 4957 util.go:48] "No ready sandbox for pod can be found. 
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.941153 4957 scope.go:117] "RemoveContainer" containerID="5578d8437d3fbc380afca9ae6b358bd8edc2a69cd3bc78f96c7be682ea0d260a"
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.943930 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2l9g4"]
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.961193 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2l9g4"]
Feb 18 16:02:20 crc kubenswrapper[4957]: I0218 16:02:20.994924 4957 scope.go:117] "RemoveContainer" containerID="1a0ea7d7231d5056f6940107626714b819b5c7e7ea25b63ab96aabece49ffa75"
Feb 18 16:02:22 crc kubenswrapper[4957]: I0218 16:02:22.226310 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" path="/var/lib/kubelet/pods/91710a2c-eb3b-48de-ba3e-f273865eecc8/volumes"
Feb 18 16:02:37 crc kubenswrapper[4957]: I0218 16:02:37.279134 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 16:02:37 crc kubenswrapper[4957]: I0218 16:02:37.279963 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.795866 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l442t"]
Feb 18 16:02:43 crc kubenswrapper[4957]: E0218 16:02:43.797408 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="registry-server"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797447 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="registry-server"
Feb 18 16:02:43 crc kubenswrapper[4957]: E0218 16:02:43.797472 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="extract-content"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797480 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="extract-content"
Feb 18 16:02:43 crc kubenswrapper[4957]: E0218 16:02:43.797515 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797523 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server"
Feb 18 16:02:43 crc kubenswrapper[4957]: E0218 16:02:43.797556 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="extract-content"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797565 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="extract-content"
Feb 18 16:02:43 crc kubenswrapper[4957]: E0218 16:02:43.797579 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="extract-utilities"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797588 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="extract-utilities"
Feb 18 16:02:43 crc kubenswrapper[4957]: E0218 16:02:43.797605 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="extract-utilities"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797612 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="extract-utilities"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797873 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="91710a2c-eb3b-48de-ba3e-f273865eecc8" containerName="registry-server"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.797905 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc758f8-752e-4498-9c77-789c27fdd313" containerName="registry-server"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.799908 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.807057 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l442t"]
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.824756 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4t74\" (UniqueName: \"kubernetes.io/projected/f552a2cd-44b2-4101-95ce-470866deba70-kube-api-access-x4t74\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.824898 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-catalog-content\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.825067 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-utilities\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.927149 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-catalog-content\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.927473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-utilities\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.927682 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-catalog-content\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.927726 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4t74\" (UniqueName: \"kubernetes.io/projected/f552a2cd-44b2-4101-95ce-470866deba70-kube-api-access-x4t74\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.928405 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-utilities\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:43 crc kubenswrapper[4957]: I0218 16:02:43.952786 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4t74\" (UniqueName: \"kubernetes.io/projected/f552a2cd-44b2-4101-95ce-470866deba70-kube-api-access-x4t74\") pod \"redhat-operators-l442t\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") " pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:44 crc kubenswrapper[4957]: I0218 16:02:44.125014 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:02:44 crc kubenswrapper[4957]: I0218 16:02:44.701697 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l442t"]
Feb 18 16:02:45 crc kubenswrapper[4957]: I0218 16:02:45.243096 4957 generic.go:334] "Generic (PLEG): container finished" podID="f552a2cd-44b2-4101-95ce-470866deba70" containerID="a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a" exitCode=0
Feb 18 16:02:45 crc kubenswrapper[4957]: I0218 16:02:45.243164 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerDied","Data":"a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a"}
Feb 18 16:02:45 crc kubenswrapper[4957]: I0218 16:02:45.243468 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerStarted","Data":"e03ab1e84724eb7375fb6abfef29f2a840a9e52805a798cfb7d3af4f49287108"}
Feb 18 16:02:46 crc kubenswrapper[4957]: I0218 16:02:46.261808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerStarted","Data":"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"}
Feb 18 16:02:51 crc kubenswrapper[4957]: I0218 16:02:51.336861 4957 generic.go:334] "Generic (PLEG): container finished" podID="f552a2cd-44b2-4101-95ce-470866deba70" containerID="a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259" exitCode=0
Feb 18 16:02:51 crc kubenswrapper[4957]: I0218 16:02:51.336930 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerDied","Data":"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"}
event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerDied","Data":"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"} Feb 18 16:02:52 crc kubenswrapper[4957]: I0218 16:02:52.362997 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerStarted","Data":"44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878"} Feb 18 16:02:52 crc kubenswrapper[4957]: I0218 16:02:52.385846 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l442t" podStartSLOduration=2.770167954 podStartE2EDuration="9.385829111s" podCreationTimestamp="2026-02-18 16:02:43 +0000 UTC" firstStartedPulling="2026-02-18 16:02:45.245129219 +0000 UTC m=+5471.765993963" lastFinishedPulling="2026-02-18 16:02:51.860790376 +0000 UTC m=+5478.381655120" observedRunningTime="2026-02-18 16:02:52.379185931 +0000 UTC m=+5478.900050685" watchObservedRunningTime="2026-02-18 16:02:52.385829111 +0000 UTC m=+5478.906693855" Feb 18 16:02:54 crc kubenswrapper[4957]: I0218 16:02:54.126017 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l442t" Feb 18 16:02:54 crc kubenswrapper[4957]: I0218 16:02:54.126281 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l442t" Feb 18 16:02:55 crc kubenswrapper[4957]: I0218 16:02:55.188061 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l442t" podUID="f552a2cd-44b2-4101-95ce-470866deba70" containerName="registry-server" probeResult="failure" output=< Feb 18 16:02:55 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 16:02:55 crc kubenswrapper[4957]: > Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.173580 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l442t" podUID="f552a2cd-44b2-4101-95ce-470866deba70" containerName="registry-server" probeResult="failure" output=< Feb 18 16:03:05 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Feb 18 16:03:05 crc kubenswrapper[4957]: > Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.838848 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-txfpk"] Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.842770 4957 util.go:30] "No sandbox for pod can be found. 
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.876238 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-txfpk"]
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.891536 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-catalog-content\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.891635 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-utilities\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.891704 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8mv\" (UniqueName: \"kubernetes.io/projected/4c32975f-2bdf-4fd5-8455-5ae76963eb32-kube-api-access-5w8mv\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.993867 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-catalog-content\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.994350 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-catalog-content\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.994490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-utilities\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.994791 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-utilities\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:05 crc kubenswrapper[4957]: I0218 16:03:05.994858 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8mv\" (UniqueName: \"kubernetes.io/projected/4c32975f-2bdf-4fd5-8455-5ae76963eb32-kube-api-access-5w8mv\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:06 crc kubenswrapper[4957]: I0218 16:03:06.021759 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8mv\" (UniqueName: \"kubernetes.io/projected/4c32975f-2bdf-4fd5-8455-5ae76963eb32-kube-api-access-5w8mv\") pod \"redhat-marketplace-txfpk\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") " pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:06 crc kubenswrapper[4957]: I0218 16:03:06.162915 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:06 crc kubenswrapper[4957]: I0218 16:03:06.668719 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-txfpk"]
Feb 18 16:03:07 crc kubenswrapper[4957]: I0218 16:03:07.256167 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerStarted","Data":"abe7c941eb2318eac02a84a4b516312ff6fb5f5c4a2bd557c2aef5a99f04a314"}
Feb 18 16:03:07 crc kubenswrapper[4957]: I0218 16:03:07.279669 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 16:03:07 crc kubenswrapper[4957]: I0218 16:03:07.279719 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 16:03:08 crc kubenswrapper[4957]: I0218 16:03:08.270785 4957 generic.go:334] "Generic (PLEG): container finished" podID="4c32975f-2bdf-4fd5-8455-5ae76963eb32" containerID="940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528" exitCode=0
Feb 18 16:03:08 crc kubenswrapper[4957]: I0218 16:03:08.270832 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerDied","Data":"940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528"}
Feb 18 16:03:10 crc kubenswrapper[4957]: I0218 16:03:10.297646 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerStarted","Data":"85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0"}
Feb 18 16:03:11 crc kubenswrapper[4957]: I0218 16:03:11.315616 4957 generic.go:334] "Generic (PLEG): container finished" podID="4c32975f-2bdf-4fd5-8455-5ae76963eb32" containerID="85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0" exitCode=0
Feb 18 16:03:11 crc kubenswrapper[4957]: I0218 16:03:11.316007 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerDied","Data":"85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0"}
Feb 18 16:03:12 crc kubenswrapper[4957]: I0218 16:03:12.329508 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerStarted","Data":"44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5"}
Feb 18 16:03:12 crc kubenswrapper[4957]: I0218 16:03:12.353241 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-txfpk" podStartSLOduration=3.8218366379999997 podStartE2EDuration="7.353217544s" podCreationTimestamp="2026-02-18 16:03:05 +0000 UTC" firstStartedPulling="2026-02-18 16:03:08.273220439 +0000 UTC m=+5494.794085203" lastFinishedPulling="2026-02-18 16:03:11.804601355 +0000 UTC m=+5498.325466109" observedRunningTime="2026-02-18 16:03:12.351762112 +0000 UTC m=+5498.872626856" watchObservedRunningTime="2026-02-18 16:03:12.353217544 +0000 UTC m=+5498.874082288"
Feb 18 16:03:15 crc kubenswrapper[4957]: I0218 16:03:15.175036 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l442t" podUID="f552a2cd-44b2-4101-95ce-470866deba70" containerName="registry-server" probeResult="failure" output=<
Feb 18 16:03:15 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 16:03:15 crc kubenswrapper[4957]: >
Feb 18 16:03:16 crc kubenswrapper[4957]: I0218 16:03:16.163664 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:16 crc kubenswrapper[4957]: I0218 16:03:16.164033 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:17 crc kubenswrapper[4957]: I0218 16:03:17.229420 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-txfpk" podUID="4c32975f-2bdf-4fd5-8455-5ae76963eb32" containerName="registry-server" probeResult="failure" output=<
Feb 18 16:03:17 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 16:03:17 crc kubenswrapper[4957]: >
Feb 18 16:03:25 crc kubenswrapper[4957]: I0218 16:03:25.187408 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l442t" podUID="f552a2cd-44b2-4101-95ce-470866deba70" containerName="registry-server" probeResult="failure" output=<
Feb 18 16:03:25 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Feb 18 16:03:25 crc kubenswrapper[4957]: >
Feb 18 16:03:26 crc kubenswrapper[4957]: I0218 16:03:26.258610 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:26 crc kubenswrapper[4957]: I0218 16:03:26.328704 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:26 crc kubenswrapper[4957]: I0218 16:03:26.502978 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-txfpk"]
Feb 18 16:03:27 crc kubenswrapper[4957]: I0218 16:03:27.509133 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-txfpk" podUID="4c32975f-2bdf-4fd5-8455-5ae76963eb32" containerName="registry-server" containerID="cri-o://44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5" gracePeriod=2
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.211614 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.324368 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-utilities\") pod \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") "
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.324712 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w8mv\" (UniqueName: \"kubernetes.io/projected/4c32975f-2bdf-4fd5-8455-5ae76963eb32-kube-api-access-5w8mv\") pod \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") "
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.324833 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-catalog-content\") pod \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\" (UID: \"4c32975f-2bdf-4fd5-8455-5ae76963eb32\") "
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.327849 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-utilities" (OuterVolumeSpecName: "utilities") pod "4c32975f-2bdf-4fd5-8455-5ae76963eb32" (UID: "4c32975f-2bdf-4fd5-8455-5ae76963eb32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.345697 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c32975f-2bdf-4fd5-8455-5ae76963eb32-kube-api-access-5w8mv" (OuterVolumeSpecName: "kube-api-access-5w8mv") pod "4c32975f-2bdf-4fd5-8455-5ae76963eb32" (UID: "4c32975f-2bdf-4fd5-8455-5ae76963eb32"). InnerVolumeSpecName "kube-api-access-5w8mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.406616 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c32975f-2bdf-4fd5-8455-5ae76963eb32" (UID: "4c32975f-2bdf-4fd5-8455-5ae76963eb32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.427411 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.427454 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w8mv\" (UniqueName: \"kubernetes.io/projected/4c32975f-2bdf-4fd5-8455-5ae76963eb32-kube-api-access-5w8mv\") on node \"crc\" DevicePath \"\""
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.427464 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c32975f-2bdf-4fd5-8455-5ae76963eb32-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.523846 4957 generic.go:334] "Generic (PLEG): container finished" podID="4c32975f-2bdf-4fd5-8455-5ae76963eb32" containerID="44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5" exitCode=0
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.523883 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerDied","Data":"44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5"}
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.523906 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-txfpk" event={"ID":"4c32975f-2bdf-4fd5-8455-5ae76963eb32","Type":"ContainerDied","Data":"abe7c941eb2318eac02a84a4b516312ff6fb5f5c4a2bd557c2aef5a99f04a314"}
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.523927 4957 scope.go:117] "RemoveContainer" containerID="44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.523977 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-txfpk"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.557523 4957 scope.go:117] "RemoveContainer" containerID="85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.561524 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-txfpk"]
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.572759 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-txfpk"]
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.585649 4957 scope.go:117] "RemoveContainer" containerID="940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.644693 4957 scope.go:117] "RemoveContainer" containerID="44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5"
Feb 18 16:03:28 crc kubenswrapper[4957]: E0218 16:03:28.645068 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5\": container with ID starting with 44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5 not found: ID does not exist" containerID="44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.645105 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5"} err="failed to get container status \"44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5\": rpc error: code = NotFound desc = could not find container \"44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5\": container with ID starting with 44a546ad67dfeed8187ef4ad6452e9862ba8be50ef6cfcf5c7e899b969f208c5 not found: ID does not exist"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.645131 4957 scope.go:117] "RemoveContainer" containerID="85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0"
Feb 18 16:03:28 crc kubenswrapper[4957]: E0218 16:03:28.645395 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0\": container with ID starting with 85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0 not found: ID does not exist" containerID="85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.645514 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0"} err="failed to get container status \"85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0\": rpc error: code = NotFound desc = could not find container \"85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0\": container with ID starting with 85a99cdca1261d5442d875747fbd6da69ed020c545458f790c2ef22927dec4d0 not found: ID does not exist"
Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.645540 4957 scope.go:117] "RemoveContainer" containerID="940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528"
Feb 18 16:03:28 crc kubenswrapper[4957]: E0218 16:03:28.646196 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528\": container with ID starting with 940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528 not found: ID does not exist" containerID="940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528"
failed" err="rpc error: code = NotFound desc = could not find container \"940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528\": container with ID starting with 940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528 not found: ID does not exist" containerID="940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528" Feb 18 16:03:28 crc kubenswrapper[4957]: I0218 16:03:28.646218 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528"} err="failed to get container status \"940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528\": rpc error: code = NotFound desc = could not find container \"940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528\": container with ID starting with 940fa622f84d4e7f09c072e056b96e343d6c0f03e98bb6378513d2c494e42528 not found: ID does not exist" Feb 18 16:03:30 crc kubenswrapper[4957]: I0218 16:03:30.233389 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c32975f-2bdf-4fd5-8455-5ae76963eb32" path="/var/lib/kubelet/pods/4c32975f-2bdf-4fd5-8455-5ae76963eb32/volumes" Feb 18 16:03:34 crc kubenswrapper[4957]: I0218 16:03:34.188741 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l442t" Feb 18 16:03:34 crc kubenswrapper[4957]: I0218 16:03:34.273736 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l442t" Feb 18 16:03:34 crc kubenswrapper[4957]: I0218 16:03:34.452984 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l442t"] Feb 18 16:03:35 crc kubenswrapper[4957]: I0218 16:03:35.623546 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l442t" podUID="f552a2cd-44b2-4101-95ce-470866deba70" containerName="registry-server" containerID="cri-o://44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878" gracePeriod=2 Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.262272 4957 util.go:48] "No ready sandbox for pod can be found. 
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.387801 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4t74\" (UniqueName: \"kubernetes.io/projected/f552a2cd-44b2-4101-95ce-470866deba70-kube-api-access-x4t74\") pod \"f552a2cd-44b2-4101-95ce-470866deba70\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") "
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.389703 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-catalog-content\") pod \"f552a2cd-44b2-4101-95ce-470866deba70\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") "
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.389941 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-utilities\") pod \"f552a2cd-44b2-4101-95ce-470866deba70\" (UID: \"f552a2cd-44b2-4101-95ce-470866deba70\") "
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.390482 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-utilities" (OuterVolumeSpecName: "utilities") pod "f552a2cd-44b2-4101-95ce-470866deba70" (UID: "f552a2cd-44b2-4101-95ce-470866deba70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.391487 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.395793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f552a2cd-44b2-4101-95ce-470866deba70-kube-api-access-x4t74" (OuterVolumeSpecName: "kube-api-access-x4t74") pod "f552a2cd-44b2-4101-95ce-470866deba70" (UID: "f552a2cd-44b2-4101-95ce-470866deba70"). InnerVolumeSpecName "kube-api-access-x4t74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.494903 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4t74\" (UniqueName: \"kubernetes.io/projected/f552a2cd-44b2-4101-95ce-470866deba70-kube-api-access-x4t74\") on node \"crc\" DevicePath \"\""
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.531390 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f552a2cd-44b2-4101-95ce-470866deba70" (UID: "f552a2cd-44b2-4101-95ce-470866deba70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.597865 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f552a2cd-44b2-4101-95ce-470866deba70-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.635323 4957 generic.go:334] "Generic (PLEG): container finished" podID="f552a2cd-44b2-4101-95ce-470866deba70" containerID="44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878" exitCode=0
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.635381 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerDied","Data":"44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878"}
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.635443 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l442t" event={"ID":"f552a2cd-44b2-4101-95ce-470866deba70","Type":"ContainerDied","Data":"e03ab1e84724eb7375fb6abfef29f2a840a9e52805a798cfb7d3af4f49287108"}
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.635411 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l442t"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.635468 4957 scope.go:117] "RemoveContainer" containerID="44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.676613 4957 scope.go:117] "RemoveContainer" containerID="a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.696547 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l442t"]
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.712596 4957 scope.go:117] "RemoveContainer" containerID="a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.718011 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l442t"]
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.755895 4957 scope.go:117] "RemoveContainer" containerID="44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878"
Feb 18 16:03:36 crc kubenswrapper[4957]: E0218 16:03:36.756436 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878\": container with ID starting with 44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878 not found: ID does not exist" containerID="44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.756491 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878"} err="failed to get container status \"44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878\": rpc error: code = NotFound desc = could not find container \"44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878\": container with ID starting with 44f88c431d61dec34eecd4127d02b28e3ead236f965d1611582a4cc2aae2c878 not found: ID does not exist"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.756522 4957 scope.go:117] "RemoveContainer" containerID="a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"
Feb 18 16:03:36 crc kubenswrapper[4957]: E0218 16:03:36.756982 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259\": container with ID starting with a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259 not found: ID does not exist" containerID="a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.757012 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259"} err="failed to get container status \"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259\": rpc error: code = NotFound desc = could not find container \"a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259\": container with ID starting with a3a4472816925676339ed0817fbb06a6947913caaa90bb6cab58041f40f65259 not found: ID does not exist"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.757032 4957 scope.go:117] "RemoveContainer" containerID="a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a"
Feb 18 16:03:36 crc kubenswrapper[4957]: E0218 16:03:36.757479 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a\": container with ID starting with a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a not found: ID does not exist" containerID="a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a"
Feb 18 16:03:36 crc kubenswrapper[4957]: I0218 16:03:36.757524 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a"} err="failed to get container status \"a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a\": rpc error: code = NotFound desc = could not find container \"a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a\": container with ID starting with a92449e9eb18f62a1fd18522e175b8a3dd4bbe58ad00fa34e3810d12d8160a0a not found: ID does not exist"
Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.279696 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.279801 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.279912 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.281300 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
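
The machine-config-daemon liveness failures are plain HTTP GETs against http://127.0.0.1:8798/health being refused, i.e. nothing is listening on that port during the failure window, so after repeated failures the kubelet kills and restarts the container (the entry above). A minimal sketch of the kind of endpoint such a probe expects; the handler body is hypothetical, only the address and path come from the log:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// The probe in the log does: GET http://127.0.0.1:8798/health
	// "connection refused" means this listener is not (yet) running.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // any 2xx keeps the liveness probe green
		fmt.Fprintln(w, "ok")
	})
	if err := http.ListenAndServe("127.0.0.1:8798", nil); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Note the long gracePeriod=600 on the restart below: unlike the catalog pods' 2s, the daemon is given ten minutes to wind down before a forced kill.
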
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.281478 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049" gracePeriod=600 Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.656491 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049" exitCode=0 Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.656539 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049"} Feb 18 16:03:37 crc kubenswrapper[4957]: I0218 16:03:37.656925 4957 scope.go:117] "RemoveContainer" containerID="89f533c1d1cd15588f3413497b2789cc0f343faad672d55c7e1b45c57bcadec6" Feb 18 16:03:38 crc kubenswrapper[4957]: I0218 16:03:38.225153 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f552a2cd-44b2-4101-95ce-470866deba70" path="/var/lib/kubelet/pods/f552a2cd-44b2-4101-95ce-470866deba70/volumes" Feb 18 16:03:38 crc kubenswrapper[4957]: I0218 16:03:38.678093 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerStarted","Data":"f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"} Feb 18 16:05:37 crc kubenswrapper[4957]: I0218 16:05:37.279913 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:05:37 crc kubenswrapper[4957]: I0218 16:05:37.280542 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:06:07 crc kubenswrapper[4957]: I0218 16:06:07.279002 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:06:07 crc kubenswrapper[4957]: I0218 16:06:07.279624 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.279466 4957 patch_prober.go:28] interesting pod/machine-config-daemon-x8wwg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.280054 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.280100 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg"
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.281002 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"} pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.281050 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerName="machine-config-daemon" containerID="cri-o://f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee" gracePeriod=600
Feb 18 16:06:37 crc kubenswrapper[4957]: E0218 16:06:37.415054 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.453596 4957 generic.go:334] "Generic (PLEG): container finished" podID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee" exitCode=0
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.453648 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" event={"ID":"4cde17e3-43e9-4bed-afe8-5b76229e35cf","Type":"ContainerDied","Data":"f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"}
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.453683 4957 scope.go:117] "RemoveContainer" containerID="a8f724e2df702f543ded24f9907a440b3b18f7f9a83807cdf909afc19774d049"
Feb 18 16:06:37 crc kubenswrapper[4957]: I0218 16:06:37.454509 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"
Feb 18 16:06:37 crc kubenswrapper[4957]: E0218 16:06:37.454834 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
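The "back-off 5m0s" in the CrashLoopBackOff errors is the kubelet's restart backoff at its cap: the delay starts small and doubles on every crash until it reaches five minutes. A sketch of that schedule; the 10s initial delay is the kubelet's documented default and is assumed here, while the 5m cap is visible in the log itself.

package main

import (
	"fmt"
	"time"
)

// restartDelay returns the crash-loop backoff before restart attempt n
// (0-indexed): an initial delay that doubles per crash, capped at max.
func restartDelay(n int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n < 8; n++ {
		fmt.Printf("restart %d: wait %v\n", n, restartDelay(n, 10*time.Second, 5*time.Minute))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then stays at the 5m cap,
	// matching the "back-off 5m0s" messages above.
}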
Feb 18 16:06:49 crc kubenswrapper[4957]: I0218 16:06:49.214697 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"
Feb 18 16:06:49 crc kubenswrapper[4957]: E0218 16:06:49.215589 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 16:07:01 crc kubenswrapper[4957]: I0218 16:07:01.213275 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"
Feb 18 16:07:01 crc kubenswrapper[4957]: E0218 16:07:01.214254 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 16:07:14 crc kubenswrapper[4957]: I0218 16:07:14.222720 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"
Feb 18 16:07:14 crc kubenswrapper[4957]: E0218 16:07:14.226276 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 16:07:29 crc kubenswrapper[4957]: I0218 16:07:29.213049 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"
Feb 18 16:07:29 crc kubenswrapper[4957]: E0218 16:07:29.214065 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
Feb 18 16:07:44 crc kubenswrapper[4957]: I0218 16:07:44.223377 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee"
Feb 18 16:07:44 crc kubenswrapper[4957]: E0218 16:07:44.224575 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf"
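Each entry above is a journald-prefixed klog line: syslog timestamp, host, unit tag, then klog's severity letter, mmdd hh:mm:ss.micros timestamp, PID, file:line, and message. A small parser fitted to this log's shape (the regex is an assumption tuned to these lines, not a general klog parser):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches entries like:
//   Feb 18 16:08:10 crc kubenswrapper[4957]: E0218 16:08:10.214675 4957 pod_workers.go:1301] "Error syncing pod, skipping" ...
var klogLine = regexp.MustCompile(
	`^(\w{3} +\d+ [\d:]+) (\S+) (\S+)\[(\d+)\]: ([IWEF])(\d{4} [\d:.]+) +\d+ (\S+)\] (.*)$`)

func main() {
	line := `Feb 18 16:08:10 crc kubenswrapper[4957]: E0218 16:08:10.214675 4957 pod_workers.go:1301] "Error syncing pod, skipping"`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("severity:", m[5]) // I=info, W=warning, E=error, F=fatal
	fmt.Println("source:  ", m[7]) // file:line inside the kubelet
	fmt.Println("message: ", m[8])
}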
podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 16:07:55 crc kubenswrapper[4957]: I0218 16:07:55.213346 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee" Feb 18 16:07:55 crc kubenswrapper[4957]: E0218 16:07:55.214380 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" Feb 18 16:08:10 crc kubenswrapper[4957]: I0218 16:08:10.213726 4957 scope.go:117] "RemoveContainer" containerID="f23fae18af3b00356e4c0e83446e372c428da5199d658699ea3e4e4bda862cee" Feb 18 16:08:10 crc kubenswrapper[4957]: E0218 16:08:10.214675 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-x8wwg_openshift-machine-config-operator(4cde17e3-43e9-4bed-afe8-5b76229e35cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-x8wwg" podUID="4cde17e3-43e9-4bed-afe8-5b76229e35cf" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515145361767024464 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015145361767017401 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015145346175016520 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015145346175015470 5ustar corecore